All Posts

What class is your command (from your python source)?
Not sure I understand the requirement - do you want to remove the sourcetypes that have events every day? Please clarify.
The custom command writes the output of the query that precedes it (a table with roughly 1000-2000 rows) to a CSV file with a custom location and custom file name. Basically, my query looks something like this:

index=your_index sourcetype=your_sourcetype
| search your_search_conditions
| lookup your_lookup_table OUTPUTNEW additional_fields
| eval new_field = if(isnull(old_field), "default_value", old_field)
| table required_fields
| exportcsv {file_path} {filename}

where exportcsv is my custom command, and my commands.conf file looks like this:

[exportcsv]
filename = exportcsv.py
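In case it helps, here is a minimal sketch of how such a command could be written with the Splunk Python SDK (splunklib.searchcommands). This is not the actual exportcsv.py: the option names file_path and filename, the single-pass CSV write, and the field order are assumptions, and the command's commands.conf stanza would also need chunked = true (see the commands.conf sketch further down).

#!/usr/bin/env python
# Minimal sketch, not the original exportcsv.py: an EventingCommand that
# receives the search results via the chunked protocol, writes them to a CSV
# file, and passes the records through unchanged.
import csv
import os
import sys

from splunklib.searchcommands import dispatch, EventingCommand, Configuration, Option


@Configuration()
class ExportCsvCommand(EventingCommand):
    # Hypothetical option names mirroring "| exportcsv {file_path} {filename}".
    file_path = Option(require=True)
    filename = Option(require=True)

    def transform(self, records):
        # Assumes transform() sees the full record stream in one invocation.
        target = os.path.join(self.file_path, self.filename)
        writer = None
        with open(target, "w", newline="") as output:
            for record in records:
                if writer is None:
                    # Use the first record's fields as the CSV header;
                    # ignore any extra fields that show up later.
                    writer = csv.DictWriter(output, fieldnames=list(record.keys()), extrasaction="ignore")
                    writer.writeheader()
                writer.writerow(record)
                yield record


dispatch(ExportCsvCommand, sys.argv, sys.stdin, sys.stdout, __name__)

With the SDK, the search invocation would be key=value style, e.g. | exportcsv file_path="/tmp" filename="results.csv", so the positional {file_path} {filename} syntax above is also an assumption.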
I don't think it's chunked data.
That worked! One last thing: how do I display only those sourcetypes out of (A B C D E) that have events for each day?
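A possible way to do that, sketched with placeholder index and sourcetype names and assuming the search runs over a bounded time range aligned to whole days:

| tstats count where index=your_index sourcetype IN ("A", "B", "C", "D", "E") by sourcetype _time span=1d
| stats dc(_time) as days_with_events by sourcetype
| addinfo
| eval days_in_range = round((info_max_time - info_min_time) / 86400)
| where days_with_events >= days_in_range

The first two lines count, per sourcetype, how many days actually have events; addinfo exposes the search time range so the last line keeps only the sourcetypes that have events on every day of that range.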
Thank you for the response, and it was indeed a Firewall Rule issue! I also had to ensure these permissions were granted on the Azure side: SecurityIncident.Read.All / SecurityIncident.ReadWrite.All and Incident.Read.All / Incident.ReadWrite.All
Hello, I want to collect logs from a machine that is set to French. Consequently, the logs are generated in French, making them difficult to parse. Is it possible to collect the logs from the machine in English while keeping the machine's language set to French?
Hi, how do I add a "read more" link option for table field values longer than 50 characters in a Splunk Classic dashboard?
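One way to approach it, sketched with a hypothetical field name long_field: keep the full value in a separate field and show a truncated version in the table, for example

index=your_index sourcetype=your_sourcetype
| eval full_text = long_field
| eval long_field = if(len(long_field) > 50, substr(long_field, 1, 50) . " ... read more", long_field)
| table some_field long_field full_text

In the Classic (Simple XML) table you could then hide the full_text column (for example with the table's <fields> option) and use a row drilldown that sets a token from $row.full_text$ to show the complete value in another panel. The field and token names here are assumptions, not something specific to your dashboard.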
What custom command type are you using? Are you accepting chunked data?
I have created a custom search command, and I pipe the output of another SPL query into it. For small amounts of data it works fine, but when the preceding query produces more data, my custom command runs and gives me incomplete data. I have only specified the filename attribute in the commands.conf of my custom command; could that be the reason?
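It could be related. If the script is built on the Splunk Python SDK (splunklib.searchcommands), the commands.conf stanza needs chunked = true so Splunk talks the chunked (SCP v2) protocol the SDK expects; with only filename set, Splunk falls back to the legacy protocol, which has its own input limits and buffering. A minimal sketch of the stanza (python.version is an assumption about your deployment):

[exportcsv]
filename = exportcsv.py
chunked = true
python.version = python3

If the script is not SDK-based but a plain legacy (Intersplunk) script, then the legacy settings such as maxinputs and supports_getinfo would be the things to review instead.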
Hi, I'm not sure if you found something useful in this presentation: https://www.youtube.com/watch?v=1yEhbKXRFMg r. Ismo
Hi, as @gcusello said, there are issues with file permissions. You should check that those files are owned by your splunk user (usually splunk). Ownership can change e.g. if someone has restarted Splunk as the root user. One other option is that your file system has been remounted as read-only (RO) due to some OS/storage-level issue. Check this as well and fix it if needed. r. Ismo
You probably used the raw endpoint on HEC?
I haven't been able to look into this as much as I'd like; however, over the past two weeks this has randomly worked a couple of times - no errors and no issues. I still don't understand how it can complain about not having the right permissions, then suddenly work well the very next day, only to give the errors again two days later...
Hi, I said that for a working dev environment you should have at least 4 vCPU and 8 GB of memory. But even more important is that your disks can perform at least 800 IOPS, preferably 1200+ IOPS. This applies to both the Splunk binary/var disks and the Splunk indexer data disks. One way to test this is to use Bonnie++ or a similar tool. Of course, if you can see that information from your infra tools, that's enough. r. Ismo
This indicates that the CPU is spending a significant amount of time waiting for I/O (typically disk) as you are ingesting/parsing data and searching, so with Splunk you need to size it sufficiently, otherwise you will get those messages. Remember, Splunk is a workhorse and needs resources.

Have a look at the two posts below, which I recently replied to around iowait:
https://community.splunk.com/t5/Splunk-Enterprise/IOWAIT-Mystery-What-is-it-Is-it-important/m-p/690256#M19597
https://community.splunk.com/t5/Splunk-Enterprise/Splunk-Enterprise-how-does-it-detect-IOWAIT-warning-or-error/m-p/690444#M19605

Go through these questions: https://docs.splunk.com/Documentation/Splunk/9.2.1/Capacity/Performancechecklist

Look at the guide in terms of performance recommendations: https://docs.splunk.com/Documentation/Splunk/9.2.1/Capacity/Summaryofperformancerecommendations

In summary, I think you will need to bump up your specifications, but for a dev environment you can ignore those messages, unless it starts to crawl and becomes unbearable.
You can do it by overwriting the field, or by creating a new field, or by using rangemap - there are many ways to do it. You can also use fieldformat, which will display a value but retain the original. See this example of how, after the stats, severity retains its numerical value and the stats still splits by the different numerical values:

| makeresults count=100
| eval severity=random() % 5 + 1
| rangemap field=severity low=1-3 medium=4-4 high=5-5
| fieldformat severity=case(severity<=3, "low", severity=4, "medium", severity=5, "high")
| stats count by severity
| eval x=severity
Hi @hazem, good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
Hi @hazem, in my opinion it should run as is, but changing the identity password is a very easy step. Ciao. Giuseppe
Hi all, I have a new doubt about the sequence of activities at index time. I have a data flow arriving via HEC on an HF that I need to process, because the data comes from a concentrator and relates to many different data flows (Linux, Oracle, etc.), so I have to assign the correct sourcetype to the data, and I also have to process the logs because they are modified by securelog: the original logs are inserted into a field of a JSON, adding some metadata. I configured the following flow:

in props.conf:

[source::http:logstash*]
TRANSFORMS-000 = global_set_metadata
TRANSFORMS-001 = set_sourcetype_by_regex
TRANSFORMS-001 = set_index_by_sourcetype

in transforms.conf:

[global_set_metadata]
INGEST_EVAL = host := coalesce(json_extract(_raw, "host.name"), json_extract(_raw, "host.hostname")), relay_hostname := json_extract(_raw, "hub"), source := "http:logstash".coalesce("::".json_extract(_raw, "log.file.path"), "")

[set_sourcetype_by_regex]
INGEST_EVAL = sourcetype := case(searchmatch("/var/log/audit/audit.log"), "linux_audit", true(), "logstash")

[set_index_by_sourcetype]
INGEST_EVAL = index:=case(sourcetype=linux, "index_linux", sourcetype=logstash, "index_logstash")

in which:
the first transformation extracts (using INGEST_EVAL) metadata such as host, source and relay_hostname (the concentrator from which the logs arrive),
the second one assigns the correct sourcetype based on a regex,
the third one assigns the correct index based on the sourcetype, using INGEST_EVAL to avoid re-running a regex.

The first two transformations are executed correctly, but the third doesn't use the sourcetype assigned by the second one. I also tried a different approach using CLONE_SOURCETYPE in the second one (instead of INGEST_EVAL) and it works, but I'm verifying whether the above flow can work, because it's more linear and should be less heavy on the system. Where could I look for the issue? Is there something wrong in the activity flow? Thank you all. Ciao. Giuseppe
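One thing worth checking, offered as a guess rather than a confirmed answer: in the third transform the comparison values are unquoted and don't match the sourcetypes the second transform actually assigns ("linux_audit" / "logstash"). In eval syntax, unquoted linux and logstash are read as field names rather than string literals, so the case() may never match. A possible variant of that stanza:

[set_index_by_sourcetype]
INGEST_EVAL = index := case(sourcetype == "linux_audit", "index_linux", sourcetype == "logstash", "index_logstash")

Also note that, as written, the second and third transforms are both registered under the same class name TRANSFORMS-001 in props.conf; if that is not just a typo in the post, one of them will override the other. Whether a sourcetype assigned by an earlier INGEST_EVAL in the same stanza is visible to a later one is a separate question, so this sketch may not be the whole story.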