All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Is it possible to reconfigure Splunk to use _indextime instead of _time for data retention policy?
Why do the Approval settings work for some actions and not for others?
There are a number of ways to do this. To find which sourcetypes have zero events, create an event for each sourcetype with a zero count, add it to the count for that sourcetype, and where the count is still zero, there were no events for that sourcetype.

| stats count by sourcetype
| append
    [| makeresults format=csv data="sourcetype,count
A,0
B,0
C,0
D,0
E,0
F,0"
    | table sourcetype count]
| stats sum(count) as count by sourcetype
| where count=0
| eval count="No events found"
That worked!! One last thing: how do I display only specific sourcetypes out of (A B C D E) where the event count for all the days is 0?
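One way to do that (a sketch building on the search in the earlier answer; the sourcetype names in the IN clause are placeholders for whichever ones you want to show) is to filter after the zero-count check:

```
| stats sum(count) as count by sourcetype
| where count=0
| search sourcetype IN ("A", "C", "E")
| eval count="No events found"
```

The filter has to come after the `where count=0` step so that the zero-count rows still exist when you restrict to the sourcetypes of interest.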
https://conf.splunk.com/files/2020/slides/TRU1761C.pdf Here I found a good PDF ... in fact starcher found it, and I found his post.
When I try to install splunklib I get an error because of pycrypto, so I couldn't follow that.
Wow, the first link is a good source of knowledge, thanks a lot. There is one more SQL query I need to implement in Splunk, but it is not covered there. Maybe you could help. The most efficient way to inner join is something like:

index=db OR index=app
| eval join=if(index="db", processId, pid)
| stats sum(rows) sum(cputime) by join

But how do I join two tables with a multicolumn key?

SELECT *
FROM mytable1
INNER JOIN mytable2
  ON (mytable1.mycolumn = mytable2.mycolumn
  AND mytable1.mycolumn2 = mytable2.mycolumn2)
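One common pattern for a multicolumn key (a sketch only; the field names come from the SQL example above, the "|" separator is an arbitrary choice, and the index names are placeholders) is to build a single composite key with eval before the stats, exactly as with the single-column case:

```
index=mytable1 OR index=mytable2
| eval joinkey=mycolumn."|".mycolumn2
| stats values(*) as * by joinkey
```

The `stats values(*) as * by joinkey` merges the fields from both sides for each composite key; you can swap it for specific aggregations like `stats sum(rows) sum(cputime) by joinkey` if you only need a few fields.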
@nguyens  Thanks it worked...
I have the same issue and believe it is caused by the last cloud update. I have not reported this as a bug yet.
Try implementing it using the class model.
There isn't a built-in feature in Splunk that can do this. You can build a custom app with external lookups (scripts) and use something like Google Translate or another service to transform your data. To get started, you can disassemble this old app and then take it from there to build your own: https://splunkbase.splunk.com/app/1609
The one issue that I see with this approach is the transfer of data buckets from Cluster A to Cluster B. Every indexer creates buckets with its own unique GUID. Transferring those buckets to a new cluster, where the cluster master has no idea who those buckets belong to, would cause a massive headache for you. If you can re-ingest the data, that would solve your problem easily. Otherwise, I highly recommend involving Splunk Support in this operation and getting their guidance.
Hi
If you are running Splunk as the splunk user but use systemctl as root, there is no need to add splunk to the sudoers file! I have seen the same kind of behavior on some AWS EC2 instances from time to time. However, I have never needed to look into why. It is hard to say which one is the root cause, Splunk or systemd; probably some weird combination causes this. Do you log your OS logs (messages, audit, etc.) into Splunk? If yes, you could try to find the reason from those. Another thing you could try is "dmesg -T" and journalctl, to see if those give you more hints.
r. Ismo
Hi
Have you already seen https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/SQLtoSplunk? It could give some hints on how things are done in Splunk vs. SQL. BUT you shouldn't follow it too closely, as the way Splunk works is totally different from SQL. I suppose there are many .conf presentations which could help you better understand how to work with Splunk. Some other good sources for working with Splunk are:
https://conf.splunk.com/watch/conf-online.html?locale=watch#/
https://education.splunk.com/Saba/Web_spf/NA10P2PRD105/app/catalog/search?searchText=search%20beginner&selectedTab=LEARNINGEVENT&filter=%7B%22LEARNINGEVENTTYPEFACET%22:%7B%22label%22:null,%22values%22:%20%5B%7B%22facetValueId%22:%221%22,%22facetValueLabel%22:null%7D%5D%7D%7D
https://docs.splunk.com/Documentation/Splunk/latest/Search/GetstartedwithSearch
https://docs.splunk.com/Documentation/Splunk/latest/SearchTutorial/WelcometotheSearchTutorial
https://www.youtube.com/results?search_query=splunk+bsides
r. Ismo
Thanks for the tips. As a workaround I made an override for the Splunk service, to get a timeout after 4 minutes instead of the default 6. We run it as the root user, but the sudoers file is something for me to investigate. Maybe it has something to do with rights, because other applications on Linux do not have this behaviour.
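For reference, a systemd drop-in override like the one described could look something like this (a sketch; the unit name Splunkd.service is the default that `splunk enable boot-start -systemd-managed 1` creates, and the 240-second value matches the 4 minutes mentioned above):

```
# /etc/systemd/system/Splunkd.service.d/timeout.conf
[Service]
TimeoutStopSec=240
```

After creating the file, run `systemctl daemon-reload` so systemd picks up the override.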
It would be helpful if you could suggest a way to overcome this, if that's the case.
I have been trying to create some analyses in Splunk for a few weeks now. Sometimes I succeed, sometimes I fail. I appreciate the help from community users a lot; it helps, and the results are sometimes amazing. Anyway, I still don't feel comfortable and experience a lot of problems with the syntax and rules of the Splunk language. I am thinking about a page/tutorial/blog/YouTube channel, something like "Splunk for DBAs" (relational DBAs!), to read about the theory, rules, and syntax of commands, with examples, like stats, join, append, timechart, and others that can manipulate multiple indexes, their relations, and aggregations. Of course this community is a mine of examples and recipes, but maybe there is a place where such topics are described and explained in a more accessible, structured way.
Any ideas or hints?
K.
My guess is that your custom command is getting the first chunk and stopping. Or perhaps your script is being called multiple times and you are overwriting the CSV?
I am not specifically using any class type; I just read the data from stdin and write directly to my CSV.
This is my script. Sorry, I am new to Splunk, so I am not sure what exactly you are asking.

import csv
import os
import sys

def exportcsv():
    try:
        if len(sys.argv) < 3:
            sys.exit(1)
        release = sys.argv[1]
        foldername = sys.argv[2]
        # folder_path and filename were undefined as posted; deriving them
        # from the two arguments so the script runs (adjust as needed).
        folder_path = foldername
        filename = release + ".csv"
        os.makedirs(folder_path, exist_ok=True)
        input_data = sys.stdin.read()
        rows = input_data.strip().split('\n')
        header = rows[0].split(',')
        data = [row.split(',') for row in rows[1:]]
        filepath = os.path.join(folder_path, filename)
        print(f"Writing to file: {filepath}", file=sys.stderr)
        with open(filepath, 'w', newline='') as file:
            writer = csv.writer(file)
            writer.writerow(header)
            writer.writerows(data)
        print("Status")
        print("Export Successful to " + filepath)
    except Exception as e:
        print(f"Error: {e}", file=sys.stderr)
        sys.exit(1)

if __name__ == "__main__":
    exportcsv()