All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello, thank you; that indeed solved my issue. I also noticed that some screenshots in your documentation are out of date. It would be worth updating them for other users. Thanks again for your response!
@richgalloway Hi, instead of using SEDCMD-rm, can we use something like this?
REGEX = cs4=(\w+) cs4Label=([\s]+)
FORMAT = $2::$1
e.g.: cs4=will smith cs4Label=suser display name
Thanks.
Hi folks, I have a KV store containing the following values: hostname and IP address. The KV store is updated every hour, using a saved search, to ensure that the hostname and IP address always match; the reason is that the environment (the IP-to-hostname relationship) changes very often (DHCP). I was thinking about an automatic lookup when logs containing an IP address are ingested, to enrich them with the hostname corresponding to the time of ingestion rather than the one current during a later search. Unfortunately, ingest-time lookups are not available in Splunk Cloud Platform, and that functionality only works with CSV file lookups. The solution with an intermediate forwarder is also not suitable for me. Any advice?
I have missing data from a certain date and time range. How would I re-ingest the data into Splunk from a UF? Below is the inputs.conf:

[monitor:///app/java/servers/app/log/app.log.2023-11-12]
index = app_logs
ignoreOlderThan = 10d
disabled = false
sourcetype = javalogs

Data is missing from Nov-11 00:05 until Nov-12 13:00. It is just one log file per day, and we do have some events, so how would I re-ingest only the missing data for that time period? Please let me know the config.
It sounds to me like when data is aggregated on the one server the original host information is lost. Would it be possible for each RHEL7 host to forward their logs directly to Splunk?  That would preserve the host information.
Hello, I have the Splunk search below and I want to put the results into a line graph so I can compare all of the disk instances (e.g. C:, D:, F:) over a period of time. The search I am using is:

index=windows_perfmon eventtype="perfmon_windows" Host="XXXX" object="LogicalDisk" counter="% Disk Write Time" instance="*" AND NOT instance=_Total AND NOT instance=Hard*
| stats latest(Value) as Value by _time, instance
| eval Value=round(Value, 2)

Any advice? I would like to create a line graph visualisation with the instances on different lines so you can do trend analysis on the Disk Write Time. The results I am getting are:

_time                instance  Value
2023-11-15 15:28:02  C:        2.83
2023-11-15 15:28:02  D:        0.01
2023-11-15 15:33:02  C:        4.10
2023-11-15 15:33:02  D:        0.01
2023-11-15 15:38:02  C:        2.59
2023-11-15 15:38:02  D:        0.01
2023-11-15 15:43:02  C:        1.98
2023-11-15 15:43:02  D:        0.01
2023-11-15 15:48:02  C:        2.81
2023-11-15 15:48:02  D:        0.01
2023-11-15 15:53:02  C:        2.51
2023-11-15 15:53:02  D:        0.01
The highest value for frozenTimePeriodInSecs is 4294967295 (136 years). There are a few size limit settings; which ones to use depends on whether you use volumes or SmartStore. Check out maxTotalDataSizeMB, maxGlobalRawDataSizeMB, maxGlobalDataSizeMB, homePath.maxDataSizeMB, and coldPath.maxDataSizeMB, all of which have the same maximum value (4294967295).
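As a minimal sketch of how those settings fit together in indexes.conf (the index name, paths, and values are hypothetical; adjust to your environment):

```ini
# indexes.conf -- hypothetical example, non-SmartStore, no volumes
[my_index]
homePath   = $SPLUNK_DB/my_index/db
coldPath   = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb

# Roll buckets to frozen once events are older than one year (in seconds)
frozenTimePeriodInSecs = 31536000

# Cap the whole index at ~500 GB, hot/warm at ~300 GB, cold at ~200 GB
maxTotalDataSizeMB     = 512000
homePath.maxDataSizeMB = 307200
coldPath.maxDataSizeMB = 204800
```

Whichever limit is reached first, time or size, is what triggers rolling to frozen.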
Thanks @bowesmana  I appreciate your help !!!!
Hi @doadams85, after you installed the Splunk Universal Forwarder on the target host, did you:
- configure the Indexer to receive logs from forwarders (by default on port 9997)?
- configure your UF to send logs to that Indexer (outputs.conf)?
- install the Splunk TA for Linux?
- enable the input stanzas?
Ciao. Giuseppe
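The first two items above can be sketched as follows (the port is the default; the indexer hostname is hypothetical):

```ini
# On the indexer -- inputs.conf: listen for forwarder traffic
[splunktcp://9997]
disabled = false

# On the Universal Forwarder -- outputs.conf: send events to that indexer
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = my-indexer.example.com:9997
```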
Hi there,  I have multiple panels added in a dashboard and I would like to reduce the font size of the entire dashboard contents - the dashboard is being created using classic dash, not the studio.  Is there a possibility to achieve this?  Ty!
KV_MODE=auto means Splunk will automatically extract fields when it finds data in key=value format.  KV_MODE=none means Splunk disables automatic search-time key=value extraction for that host, source, or sourcetype.  This can be useful if you extract the fields yourself. The Add-on Builder must be used in a local, non-clustered instance.  It should work on Windows, but I've not done so myself.  Apps built on a Windows platform will not pass Splunk Cloud app vetting because Windows does not set the file permissions correctly.
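For example, a props.conf stanza choosing between the two modes might look like this (the sourcetype name is hypothetical):

```ini
# props.conf
[my_sourcetype]
# Let Splunk auto-extract key=value pairs at search time
KV_MODE = auto

# ...or disable automatic extraction because you define your own
# extractions (e.g. EXTRACT- or REPORT- settings) instead:
#KV_MODE = none
```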
Hi All - Any ideas on what I posted?
Perhaps it is the data. Can you share some events which aren't being matched correctly?
It's only a few events, but the field value is not being extracted.
Click on the > symbol on each line to see more information about the failure.  Then there will be a button you can click to get specific information on how to remediate the problem.  Often, that involves installing a newer version of an app.  Other times, you simply need to add 'version="1.1"' to the first line of each dashboard source.  Some fixes require Python code changes, and Splunk will offer suggestions for them.
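The version attribute goes on the root element of the dashboard's Simple XML source, e.g. (label and contents hypothetical):

```xml
<dashboard version="1.1">
  <label>Example Dashboard</label>
  <!-- rows and panels unchanged -->
</dashboard>
```

If the dashboard has inputs, the root element is <form version="1.1"> instead.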
The null queue is for whole events, not individual fields.  One can remove fields using SEDCMD in props.conf:

SEDCMD-rm_XYZData = s/XYZData\>.*\<\/XYZData\>//
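In context, the SEDCMD setting belongs in a props.conf stanza for the relevant sourcetype, applied at index time on the indexer or a heavy forwarder (the sourcetype name here is hypothetical):

```ini
# props.conf
[my_sourcetype]
SEDCMD-rm_XYZData = s/XYZData\>.*\<\/XYZData\>//
```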
Hi @smithy001, the number of volumes isn't so relevant: make a complete Capacity Plan. Ciao. Giuseppe
Hi @MrJohn230, first of all, if possible, try to avoid the join command! I understand that we all come from SQL, but Splunk isn't a database, so the join command should be avoided whenever possible and replaced, e.g., with the stats command, because it is a very slow and resource-hungry command. E.g., try something like this (obviously I cannot check it):

index=customer ((name IN (gate-green, gate-blue) msg="*First time: *") OR (name IN (cust-blue, cust-green) msg="*COMPLETED *"))
| rex field=msg "First time: (?<UserId>\d+)"
| rex field=msg "Message\|[^\t\{]*(?<json>{[^\t]+})"
| spath input=json path=infoId output=UserId
| eval status=if(name IN ("gate-green", "gate-blue") AND like(msg, "%First time: %"), "FirstRequest", "Completed")
| stats dc(status) AS status_count values(status) AS status BY UserId
| eval status=if(status_count=2, "both", status)
| table UserId status
| search UserId IN (125, 999, 418, 208)

Then you can decide whether to keep all the UserIds or only the ones with both statuses. About your search, try using quotes around the IN values. Ciao. Giuseppe
This was solved with the help of PS. On the Application API in Azure AD, add the User.Read.All permission of type Application to the configured permissions. Remember to add all the users that need to access Splunk to the Enterprise Application.
Hi @ITWhisperer, based on the raw events below, I need to filter on the attributes "suppliedMaterial" and "version", get the resulting row, and then add the columns sqsSentCount and dataNotFoundIdsCount, similar to the table below:

objectType        version  dataNotFoundIdsCount  sqsSentCount
suppliedMaterial  all      1                     8
suppliedMaterial  latest   3                     9
Material          all      3                     11
Material          latest   6                     10

suppliedMaterial events:

1st event - {"name":"","awsRequestId":"1","hostname":"","pid":8,"level":30,"event":{"resource":"/v1/reprocess/id","path":"/support/v1/reprocess/id","httpMethod":"POST","pathParameters":null,"queryStringParameters":null,"body":"{\n \"objectType\": \"suppliedMaterial\",\n \"objectIds\": [\n \"569683\",\n \"564373er\",\n \"569129\"\n ],\n \"version\": \"all\"\n}","requestContext":{"requestId":"","authorizer":{"principalId":"","integrationLatency":0},"domainName":""}},"msg":"reprocess event","time":"2023-11-15T05:47:59.318Z","v":0}
2nd event - {"name":"","awsRequestId":"1","hostname":"","pid":8,"level":30,"supportType":"reprocess","entity":"suppliedMaterial","dataNotFoundIds":["564373er"],"dataNotFoundIdsCount":1,"msg":"data not found for Ids","time":"2023-11-15T05:47:59.329Z","v":0}
3rd event - {"name":"","awsRequestId":"1","hostname":"","pid":8,"level":30,"supportType":"reprocess","entity":"suppliedMaterial","sqsSentCount":8,"msg":"sqs sent count","time":"2023-11-15T05:47:59.364Z","v":0}
4th event - {"name":"","awsRequestId":"1","hostname":"","pid":8,"level":30,"eventBody":{"objectType":"suppliedMaterial","objectIds":["569683","564373er","669179"],"version":"all"},"msg":"request body","time":"2023-11-15T05:47:59.318Z","v":0}
5th event - {"name":"","awsRequestId":"1","hostname":"","pid":8,"level":30,"event":{"resource":"/v1/reprocess/id","path":"/support/v1/reprocess/id","httpMethod":"POST","pathParameters":null,"queryStringParameters":null,"body":"{\n \"objectType\": \"suppliedMaterial\",\n \"objectIds\": [\n \"569683\",\n \"564373er\",\n \"669179\"\n ],\n \"version\": \"latest\"\n}","requestContext":{"requestId":"","authorizer":{"principalId":"","integrationLatency":0},"domainName":""}},"msg":"reprocess event","time":"2023-11-15T05:47:59.318Z","v":0}
6th event - {"name":"","awsRequestId":"1","hostname":"","pid":8,"level":30,"supportType":"reprocess","entity":"suppliedMaterial","dataNotFoundIds":["564373er"],"dataNotFoundIdsCount":3,"msg":"data not found for Ids","time":"2023-11-15T05:47:59.329Z","v":0}
7th event - {"name":"","awsRequestId":"1","hostname":"","pid":8,"level":30,"supportType":"reprocess","entity":"suppliedMaterial","sqsSentCount":9,"msg":"sqs sent count","time":"2023-11-15T05:47:59.364Z","v":0}
8th event - {"name":"","awsRequestId":"1","hostname":"","pid":8,"level":30,"eventBody":{"objectType":"suppliedMaterial","objectIds":["569683","564373er","569129"],"version":"latest"},"msg":"request body","time":"2023-11-15T05:47:59.318Z","v":0}

material events:

1st event - {"name":"","awsRequestId":"1","hostname":"","pid":8,"level":30,"event":{"resource":"/v1/reprocess/id","path":"/support/v1/reprocess/id","httpMethod":"POST","pathParameters":null,"queryStringParameters":null,"body":"{\n \"objectType\": \"material\",\n \"objectIds\": [\n \"569683\",\n \"564373er\",\n \"469196\"\n ],\n \"version\": \"all\"\n}","requestContext":{"requestId":"","authorizer":{"principalId":"","integrationLatency":0},"domainName":""}},"msg":"reprocess event","time":"2023-11-15T05:47:59.318Z","v":0}
2nd event - {"name":"","awsRequestId":"1","hostname":"","pid":8,"level":30,"supportType":"reprocess","entity":"material","dataNotFoundIds":["564373er"],"dataNotFoundIdsCount":3,"msg":"data not found for Ids","time":"2023-11-15T05:47:59.329Z","v":0}
3rd event - {"name":"","awsRequestId":"1","hostname":"","pid":8,"level":30,"supportType":"reprocess","entity":"material","sqsSentCount":11,"msg":"sqs sent count","time":"2023-11-15T05:47:59.364Z","v":0}
4th event - {"name":"","awsRequestId":"1","hostname":"","pid":8,"level":30,"eventBody":{"objectType":"material","objectIds":["569683","564373er","569129"],"version":"all"},"msg":"request body","time":"2023-11-15T05:47:59.318Z","v":0}
5th event - {"name":"","awsRequestId":"1","hostname":"","pid":8,"level":30,"event":{"resource":"/v1/reprocess/id","path":"/support/v1/reprocess/id","httpMethod":"POST","pathParameters":null,"queryStringParameters":null,"body":"{\n \"objectType\": \"suppliedMaterial\",\n \"objectIds\": [\n \"569683\",\n \"564373er\",\n \"569129\"\n ],\n \"version\": \"latest\"\n}","requestContext":{"requestId":"","authorizer":{"principalId":"","integrationLatency":0},"domainName":""}},"msg":"reprocess event","time":"2023-11-15T05:47:59.318Z","v":0}
6th event - {"name":"","awsRequestId":"1","hostname":"","pid":8,"level":30,"supportType":"reprocess","entity":"suppliedMaterial","dataNotFoundIds":["564373er"],"dataNotFoundIdsCount":6,"msg":"data not found for Ids","time":"2023-11-15T05:47:59.329Z","v":0}
7th event - {"name":"","awsRequestId":"1","hostname":"","pid":8,"level":30,"supportType":"reprocess","entity":"suppliedMaterial","sqsSentCount":10,"msg":"sqs sent count","time":"2023-11-15T05:47:59.364Z","v":0}
8th event - {"name":"","awsRequestId":"1","hostname":"","pid":8,"level":30,"eventBody":{"objectType":"suppliedMaterial","objectIds":["569683","564373er","569129"],"version":"latest"},"msg":"request body","time":"2023-11-15T05:47:59.318Z","v":0}