All Posts

There isn't a built-in feature in Splunk that can do this. You can build a custom app with external lookups (scripts) and use something like Google Translate or another service to transform your data. To get started, you can take this old app apart and build your own from there: https://splunkbase.splunk.com/app/1609
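For reference, a minimal sketch of such an external lookup, assuming a hypothetical stanza name translate_en, script name translate.py, and fields message/message_en; the actual translation call is stubbed out since it depends on whichever service you pick. In transforms.conf:

[translate_en]
external_cmd = translate.py message message_en
fields_list = message, message_en

And translate.py (placed in the app's bin directory); Splunk passes the field names as arguments, streams a CSV with a header row on stdin, and expects CSV back on stdout:

import csv
import sys

def translate(text):
    # stub: call Google Translate or another service here
    return text

def main():
    # field names as configured in external_cmd
    infield, outfield = sys.argv[1], sys.argv[2]
    reader = csv.DictReader(sys.stdin)
    writer = csv.DictWriter(sys.stdout, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        # only fill in the output field when it is empty
        if row.get(infield) and not row.get(outfield):
            row[outfield] = translate(row[infield])
        writer.writerow(row)

if __name__ == "__main__":
    main()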
The one issue that I see with this approach is the transfer of data buckets from Cluster A to Cluster B. Every indexer creates buckets stamped with its own unique GUID. Transferring those buckets to a new cluster whose cluster master has no idea who they belong to would cause a massive headache for you. If you can re-ingest the data, that would solve your problem easily. Otherwise, I highly recommend involving Splunk Support in this operation and getting their guidance.
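To illustrate why: bucket directory names encode the newest/oldest event times, a local ID, and (in a cluster) the originating peer's GUID. Illustrative names:

db_1718000000_1717000000_42                                        (standalone)
db_1718000000_1717000000_42_9A2B6C3D-1234-5678-ABCD-0F1E2D3C4B5A   (clustered: peer GUID appended)

A new cluster master has no peer with that GUID, so it cannot reconcile ownership or replication counts for the copied buckets.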
Hi. If you are running Splunk as the splunk user but use systemctl as root, there is no need to add splunk to the sudoers file! I have seen the same kind of behavior on some AWS EC2 instances from time to time; however, I have never needed to look into why. It's hard to say whether the root cause is Splunk or systemd; probably some weird combination of the two causes this. Do you log your OS logs (messages, audit, etc.) into Splunk? If yes, you could try to find the reason from those. Another thing you could try is "dmesg -T" and journalctl, and see if those give you more hints. r. Ismo
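For example (assuming the unit is named Splunkd.service; adjust to your setup):

dmesg -T | tail -n 50
journalctl -u Splunkd.service --since "-1h" --no-pager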
Hi. Have you already seen https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/SQLtoSplunk? It can give some hints about how things are done in Splunk vs. SQL. BUT you shouldn't follow it too closely, as the way Splunk works is totally different from SQL. I suppose there are many .conf presentations that could help you better understand how to work with Splunk. Some other good sources for learning Splunk are:
https://conf.splunk.com/watch/conf-online.html?locale=watch#/
https://education.splunk.com/Saba/Web_spf/NA10P2PRD105/app/catalog/search?searchText=search%20beginner&selectedTab=LEARNINGEVENT&filter=%7B%22LEARNINGEVENTTYPEFACET%22:%7B%22label%22:null,%22values%22:%20%5B%7B%22facetValueId%22:%221%22,%22facetValueLabel%22:null%7D%5D%7D%7D
https://docs.splunk.com/Documentation/Splunk/latest/Search/GetstartedwithSearch
https://docs.splunk.com/Documentation/Splunk/latest/SearchTutorial/WelcometotheSearchTutorial
https://www.youtube.com/results?search_query=splunk+bsides
r. Ismo
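As a quick taste of the mapping, here is a SQL aggregation next to a rough SPL equivalent (the table/index and field names are made up):

SELECT status, COUNT(*) FROM access_log GROUP BY status;

index=access_log | stats count BY status

The big mental shift is that SPL is a pipeline: each | hands the current result set to the next command, rather than being one declarative statement.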
Thanks for the tips. As a workaround I made an override for the Splunk service to time out after 4 minutes instead of the default 6. We run it as the root user, but the sudoers file is something for me to investigate. Maybe it has something to do with rights, because other applications on Linux do not show this behaviour.
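For reference, such an override can be done with a systemd drop-in file (assuming the unit is named Splunkd.service), followed by a daemon reload:

# /etc/systemd/system/Splunkd.service.d/override.conf
[Service]
TimeoutStopSec=240

# then:
systemctl daemon-reload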
It would be helpful if you could suggest a way to overcome this, if that's the case.
I have been trying to create some analyses in Splunk for a few weeks now. Sometimes I succeed, sometimes I fail. I appreciate the help from community users a lot, and the results are sometimes amazing. Anyway, I still don't feel comfortable and run into a lot of problems with the syntax and rules of the Splunk language. I am thinking about a page/tutorial/blog/YouTube channel, something like "Splunk for the relational DBA!", to read about the theory, rules, and command syntax with examples: stats, join, append, timechart and others that can manipulate multiple indexes and their relations and aggregations. Of course this community is a mine of examples and recipes, but maybe there is a place where such topics are described and explained in a more accessible, structured way. Any ideas or hints? K.
My guess is that your custom command is getting the first chunk and stopping? Or perhaps your script is being called multiple times and you are overwriting the CSV?
I am not specifically using any class type; I just read the data from stdin and write it directly to my CSV.
This is my script; sorry, I am new to Splunk, so I'm not aware of what exactly you are asking.

import csv
import os
import sys

def exportcsv():
    try:
        if len(sys.argv) < 3:
            sys.exit(1)
        # arguments passed by the search: target folder and file name
        folder_path = sys.argv[1]
        filename = sys.argv[2]
        os.makedirs(folder_path, exist_ok=True)
        # read the search results (CSV) from stdin
        input_data = sys.stdin.read()
        rows = input_data.strip().split('\n')
        header = rows[0].split(',')
        data = [row.split(',') for row in rows[1:]]
        filepath = os.path.join(folder_path, filename)
        print(f"Writing to file: {filepath}", file=sys.stderr)
        with open(filepath, 'w', newline='') as file:
            writer = csv.writer(file)
            writer.writerow(header)
            writer.writerows(data)
        # return a single-column result back to Splunk
        print("Status")
        print("Export Successful to " + filepath)
    except Exception as e:
        print(f"Error: {e}", file=sys.stderr)
        sys.exit(1)

if __name__ == "__main__":
    exportcsv()
That's an odd one; never seen that. I have installed many Splunk instances on RHEL/CentOS/Fedora (7/8) over the years (the other flavours not so much), with systemctl and not init.d. There may be a parameter that could be changed in the Splunkd.service file, for example TimeoutStopSec=360; you could perhaps lower this (it's not something I've done or ever had to, so only try it on a lab/test server first and see if it makes a difference): https://docs.splunk.com/Documentation/Splunk/9.2.2/Workloads/Configuresystemd#Configure_systemd_manually

Other areas to troubleshoot/investigate further: ensure the splunk user has the below (add it to wheel or sudoers) and see if that makes a difference. Non-root users must have super-user permissions to manually configure systemd on Linux, and to run start, stop, and restart commands under systemd: https://docs.splunk.com/Documentation/Splunk/9.2.2/Workloads/Configuresystemd#Configure_systemd_manually
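If you go the sudoers route, scoping the rule to the service commands is tidier than full wheel access. A sketch, assuming the unit is named Splunkd.service and systemctl lives at /usr/bin/systemctl:

# /etc/sudoers.d/splunk (edit with visudo -f)
splunk ALL=(root) NOPASSWD: /usr/bin/systemctl start Splunkd.service, /usr/bin/systemctl stop Splunkd.service, /usr/bin/systemctl restart Splunkd.service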
What class is your command (from your Python source)?
Not sure I understand the requirement - do you want to remove the sourcetypes which have events every day? Please clarify.
The custom command writes the output of the query preceding it, which gives me a table with nearly 1000-2000 rows, to a CSV file with a custom location and custom file name. Basically my query will look something like the one below:

index=your_index sourcetype=your_sourcetype
| search your_search_conditions
| lookup your_lookup_table OUTPUTNEW additional_fields
| eval new_field = if(isnull(old_field), "default_value", old_field)
| table required_fields
| exportcsv {file_path} {filename}

in which exportcsv is my custom command, and my commands.conf file looks like below:

[exportcsv]
filename = exportcsv.py
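One side note on the chunked question above: with only filename set, Splunk treats this as a legacy (v1) command. If you wanted the SCPv2 (chunked) protocol instead, the stanza would look roughly like this, assuming the script were rewritten to speak that protocol (e.g. via the splunklib Python SDK):

[exportcsv]
filename = exportcsv.py
chunked = true
python.version = python3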
I don't think it's chunked data.
That worked. One last thing: how do I display only the specific sourcetypes out of (A B C D E) that have events for each day?
Thank you for the response, and it was indeed a firewall rule issue! I also had to ensure these permissions were granted on the Azure side: SecurityIncident.Read.all/SecurityIncident.ReadWrite.all and Incident.Read.All/Incident.ReadWrite.All
Hello, I want to collect logs from a machine that is set to French. Consequently, the logs are generated in French, making them difficult to parse. Is it possible to collect the logs from the machine in English while keeping the machine's language set to French?
Hi, how do I add a "read more" link for table field values longer than 50 characters in a Splunk Classic dashboard?
What custom command type are you using? Are you accepting chunked data?