All Posts


It would be helpful if you could suggest a way to overcome this, if that's the case.
I have been trying to create some analyses in Splunk for a few weeks now. Sometimes I succeed, sometimes I fail. I appreciate the help from community users a lot, and the results are sometimes amazing. Still, I don't feel comfortable yet and run into a lot of problems with the syntax and rules of the Splunk search language. I am thinking about a page/tutorial/blog/YouTube channel, something like "Splunk for a relational DBA!", to read about the theory, rules, and syntax of commands like stats, join, append, and timechart, and others that can manipulate multiple indexes, their relations, and aggregations, with examples. Of course this community is a mine of examples and recipes, but maybe there is a place where such topics are described and explained in a more accessible, structured way. Any ideas or hints? K.
My guess is that your custom command is getting the first chunk and stopping? Or perhaps your script is being called multiple times and you are overwriting the CSV?
I am not specifically using any class type; I just read the data from stdin and write it directly to my CSV.
This is my script. Sorry, I am new to Splunk, so I'm not aware of what exactly you are asking.

import csv
import os
import sys

def exportcsv():
    try:
        if len(sys.argv) < 3:
            sys.exit(1)
        # Positional arguments passed from the search: target folder and file name
        folder_path = sys.argv[1]
        filename = sys.argv[2]
        os.makedirs(folder_path, exist_ok=True)
        # Read the search results (CSV) that Splunk passes on stdin
        input_data = sys.stdin.read()
        rows = input_data.strip().split('\n')
        header = rows[0].split(',')
        data = [row.split(',') for row in rows[1:]]
        filepath = os.path.join(folder_path, filename)
        print(f"Writing to file: {filepath}", file=sys.stderr)
        with open(filepath, 'w', newline='') as file:
            writer = csv.writer(file)
            writer.writerow(header)
            writer.writerows(data)
        # Emit a small result table back to Splunk
        print("Status")
        print("Export Successful to " + filepath)
    except Exception as e:
        print(f"Error: {e}", file=sys.stderr)
        sys.exit(1)

if __name__ == "__main__":
    exportcsv()
That's an odd one, never seen that. I have installed many Splunk instances on RHEL/CentOS/Fedora (7/8) over the years (not so much the other flavours), with systemctl rather than init.d. There may be a parameter that could be changed in the Splunkd.service file, for example TimeoutStopSec=360; lowering this might make a difference. It's not something I've done or ever had to do, so only try it on a lab/test server first. https://docs.splunk.com/Documentation/Splunk/9.2.2/Workloads/Configuresystemd#Configure_systemd_manually

Other areas to troubleshoot/investigate further: ensure the splunk user has the below (add it to wheel or sudoers) and see if that makes a difference. Non-root users must have super user permissions to manually configure systemd on Linux. Non-root users must have super user permissions to run start, stop, and restart commands under systemd. https://docs.splunk.com/Documentation/Splunk/9.2.2/Workloads/Configuresystemd#Configure_systemd_manually
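If you do experiment with that timeout, a systemd drop-in keeps the change separate from the unit file Splunk manages; a sketch, with the 180-second value purely illustrative:

# /etc/systemd/system/Splunkd.service.d/override.conf (drop-in, not the unit itself)
[Service]
TimeoutStopSec=180

Run systemctl daemon-reload after saving it so systemd picks up the override before the next restart.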
What class is your command (from your Python source)?
Not sure I understand the requirement - do you want to remove the sourcetypes which have events every day? Please clarify.
The custom command writes the output of the query preceding it (a table with nearly 1000-2000 rows) to a CSV file with a custom location and custom file name. Basically, my query looks something like the one below:

index=your_index sourcetype=your_sourcetype
| search your_search_conditions
| lookup your_lookup_table OUTPUTNEW additional_fields
| eval new_field = if(isnull(old_field), "default_value", old_field)
| table required_fields
| exportcsv {file_path} {filename}

in which exportcsv is my custom command, and my commands.conf file looks like this:

[exportcsv]
filename = exportcsv.py
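One detail worth checking here, assuming the goal is for exportcsv to see the complete result set: with only filename set, the command runs under the legacy external-command protocol, where each invocation is capped at maxinputs events (50,000 by default). Depending on the streaming setting, the script either never sees the rest of the results or is re-invoked once per batch, in which case a 'w'-mode open keeps only the last batch. Opting into the chunked protocol is a one-line change to the stanza:

[exportcsv]
filename = exportcsv.py
chunked = true

Note that chunked = true also requires the script itself to speak the chunked protocol, e.g. via the SDK as sketched earlier in this thread.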
I don't think it's chunked data.
That worked! One last thing: out of the sourcetypes (A B C D E), how do I display only the ones that have an event for each day?
Thank you for the response, and it was indeed a firewall rule issue! I also had to ensure these permissions were granted on the Azure side: SecurityIncident.Read.all/SecurityIncident.ReadWrite.all and Incident.Read.All/Incident.ReadWrite.All.
Hello, I want to collect logs from a machine that is set to French. Consequently, the logs are generated in French, which makes them difficult to parse. Is it possible to collect the logs from the machine in English while keeping the machine's language set to French?
Hi, how do I add a "read more" link for table field values longer than 50 characters in a Splunk Classic dashboard?
What custom command type are you using? Are you accepting chunked data?
I have created a custom search command, and I pipe another SPL query into it. For small amounts of data it works fine, but when the preceding query produces a large amount of data, my custom command runs and gives me incomplete data. I have only set the filename attribute in the commands.conf of my custom command; could this be the reason?
Hi, I'm not sure if you found anything useful in this presentation? https://www.youtube.com/watch?v=1yEhbKXRFMg r. Ismo
Hi, as @gcusello said, there are issues with file permissions. You should check that those files are owned by your splunk user (usually splunk). Ownership can change if, for example, someone has restarted Splunk as the root user. Another option is that your file system has been remounted read-only due to some OS/storage-level issue. Check this as well and fix it if needed. r. Ismo
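If it helps, a quick way to check both of those causes from the machine itself; the path below is only an example, point it at the files named in the error:

# Hypothetical quick check for the two causes above: ownership and writability.
import os
import pwd

path = "/opt/splunk/var/lib/splunk"  # example path; use the files from the error message
st = os.stat(path)
print("owner:", pwd.getpwuid(st.st_uid).pw_name)              # should be your splunk user
print("writable by current user:", os.access(path, os.W_OK))  # False may indicate an RO remount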
You probably used the raw endpoint on HEC?
I haven't been able to look into this as much as I'd like; however, over the past 2 weeks this has randomly worked a couple of times, with no errors and no issues. I still don't understand how it can complain about not having the right permissions, then suddenly work well the very next day, only to give the errors again 2 days later...