All Posts

@ankitarath2011 Please have a look  https://www.splunk.com/en_us/blog/tips-and-tricks/collecting-docker-logs-and-stats-with-splunk.html?locale=en_us  https://www.tekstream.com/blog/containerization-and-splunk-how-docker-and-splunk-work-together/ 
Hi @Srinath.S, I found this AppD Docs page: https://docs.appdynamics.com/appd/24.x/24.7/en/cisco-appdynamics-essentials/dashboards-and-reports Let me know if it helps with your question. 
Many thanks! I was troubleshooting why Splunk was not reading the Security event log. After adding "NT Service\SplunkForwarder" to the "Event Log Readers" group, it finally works.
I'm trying to call the nslookupsearch custom command, which just does an nslookup for an IP or computer name. I want to use it in a search because some of the data we ingest doesn't contain the information we need, so we implemented the custom command to run an nslookup and populate a table with the data it retrieves.
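A minimal sketch of how such a command might be called in a pipeline, assuming nslookupsearch is a streaming custom command that accepts a field argument (the option name and output field below are hypothetical):

index=network_traffic
``` resolve each source IP with the hypothetical custom command and show the result ```
| nslookupsearch field=src_ip
| table src_ip resolved_host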
You could try something along these lines

| makeresults format=csv data="index,1-Aug,8-Aug,15-Aug,22-Aug,29-Aug
index1,5.76,5.528,5.645,7.666,6.783
index2,0.017,0.023,0.036,0.033,14.985
index3,2.333,2.257,2.301,2.571,0.971
index4,2.235,1.649,2.01,2.339,2.336
index5,19.114,14.179,14.174,18.46,19.948"
``` the lines above simulate your data (without the calculations) ```
| untable index date size
| eval date=strptime(date."-2024","%d-%b-%Y")
| fieldformat date=strftime(date,"%F")
| sort 0 index date
| streamstats last(size) as previous window=1 global=f current=f by index
| eval relative_size = 100 * size / previous
| fields - previous
| appendpipe [| eval date=strftime(date, "%F")." change" | xyseries index date relative_size]
| appendpipe [| eval date=strftime(date, "%F") | xyseries index date size]
| fields - date size relative_size
| stats values(*) as * by index
Hi @ferdousfahim , I usually use these transformations at search time, but to apply them on Forwarders you have to use INDEXED_EXTRACTIONS=CSV in props.conf; for more info see https://docs.splunk.com/Documentation/Splunk/9.3.0/Data/Extractfieldsfromfileswithstructureddata Ciao. Giuseppe
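A minimal props.conf sketch for the forwarder, assuming a hypothetical sourcetype name and a CSV whose first line is the header (adjust the delimiter and timestamp column to your file):

# props.conf deployed to the Universal Forwarder monitoring the CSV
[my_csv_sourcetype]
INDEXED_EXTRACTIONS = CSV
FIELD_DELIMITER = ,
HEADER_FIELD_LINE_NUMBER = 1
# hypothetical name of the column that carries the event timestamp
TIMESTAMP_FIELDS = timestamp

Because indexed extractions run where the file is read, this stanza has to live on the forwarder and the monitored input must use the same sourcetype.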
Hi @MoeTaher , please try something like this:

index=EDR
| stats count
| eval Status=if(count > 0,"Compliant","Not Compliant"), Solution="EDR"
| fields - count
| append [ | inputlookup compliance.csv | fields Solution Status ]
| stats first(Status) AS Status BY Solution
| outputlookup compliance.csv

Ciao. Giuseppe
Thanks, it worked! All I have to do is convert it to a percentage and we're all good to go. I'll pass along the karma.
Hi @MK3 , good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated by all the contributors.
I found out what the problem was. There is a Cribl server between the UF and the Indexer, which I mistakenly ruled out as the source of the problem during troubleshooting. I bypassed Cribl for a while and the problem disappeared. The rest went pretty fast after that: a persistent queue was enabled for the Linux input/source in "Always On" mode, while it was not turned on for the Windows input/source, and the Windows logs were OK the whole time. After turning it off for the Linux data, the problem disappeared. I don't understand why the persistent queue behaves this way, but I don't have time to investigate further; maybe it's a Cribl bug or a misunderstanding of the functionality. The persistent queue is not required on this input in the project, so I can leave it off. For me, it's currently resolved. Thank you all for your help and your time.
I have a search that returns values for dates and I want to calculate the changes between the dates. What I want would look something like this:

index  1-Aug  8-Aug  Aug 8 change  15-Aug  Aug 15 change  22-Aug  Aug 22 change  29-Aug  Aug 29 change
index1  5.76  5.528  96%  5.645  102%  7.666  136%  6.783  88%
index2  0.017  0.023  135%  0.036  157%  0.033  92%  14.985  45409%
index3  2.333  2.257  97%  2.301  102%  2.571  112%  0.971  38%
index4  2.235  1.649  74%  2.01  122%  2.339  116%  2.336  100%
index5  19.114  14.179  74%  14.174  100%  18.46  130%  19.948  108%

I have a search that returns the values without the change calculations:

| loadjob savedsearch="me@email.com:splunk_instance_monitoring:30 Days Ingest By Index"
| eval day_of_week=strftime(_time,"%a"), date=(strftime(_time,"%Y-%m-%d"))
| search day_of_week=Tue
| fields - _time day_of_week
| transpose header_field=date
| rename column AS index
| sort index
| addcoltotals label=Totals labelfield=index

If the headers were something like "week 1" "week 2" I could get what I want, but with date headers that change every time I haven't found a way. I've tried using foreach to iterate through and calculate the changes from one column to the next, but haven't been able to come up with the right solution. Can anyone help?
Is there a way to see who modified system settings in Splunk Cloud? For example, we recently had an issue where a Splunk IP allow list was modified; however, we cannot seem to find the activity in the _internal or _audit indexes.
Thanks @ITWhisperer for your suggestion. I was able to produce the requested data with this command.
Try something like this

| inputlookup fm4143_3d.csv
| stats count(FLOW_ID) as total count(ERROR_MESSAGE) as fail
| eval success = total - fail
| eval success_rate = 100 * success/total
| fields success_rate
Hello. I have Splunk Enterprise (https://splunk6.****.net, run from a browser) and am running a query, collecting the results, and saving them as a report (to get the output periodically, i.e. summary indexing). How do I connect to my Postgres database installed on my PC to send/store this data? DB Connect is not supported for my system (deprecated / sunset). Thanks
Hello All, I have a task to measure the compliance of the security solutions onboarded on the SIEM. That means I have to regularly check whether a solution is onboarded by checking whether any logs are being generated in a specific index.

For example, my search query will be:

index=EDR
| stats count
| eval status=if(count > 0,"Compliant","Not Compliant")
| fields - count

Result that I should have:

status
Compliant

I have a lookup table called compliance.csv and I need to update the status from "Not Compliant" to "Compliant":

Solution  Status
EDR  Not Compliant
DLP  Not Compliant

How can I utilize the outputlookup command to update the table rather than overwrite or append to it?
Hi,

So, I have an issue where I have a log with a field called ERROR_MESSAGES for each event that ends in an error. The other events, which have a NULL value under ERROR_MESSAGES, are successful events. I'm trying to get the percentage of successful events over total events. This is the query I built, but when I run the search, success_rate comes back with no percentage value, and I know there are 338/3190 successful events. Any help would go a long way; I've been struggling. I feel like my SPL is getting better, but man, this one has me scratching my head.

| inputlookup fm4143_3d.csv
| stats count(FLOW_ID) as total
| appendpipe [| inputlookup fm4143_3d.csv | where isnull(ERROR_MESSAGE) | stats count as success]
| eval success_rate = ((success/total)*100)
| fields success_rate
Thank you Giuseppe, that was exactly what I was looking to achieve. 
Hello, Can you please expand on how option 2, "Consider periodically reading the database in batch mode into a lookup file (or KV store). Each read would overwrite the existing lookup file so you'd only have the most recent data in Splunk.", could be implemented?
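One common way to implement option 2 is a scheduled saved search that pulls the table with Splunk DB Connect and overwrites the lookup on every run; a minimal sketch, assuming DB Connect is available and using hypothetical names for the connection, table, and lookup file:

``` pull the current contents of the table and replace the lookup ```
| dbxquery connection="my_postgres" query="SELECT * FROM customers"
| outputlookup customers.csv

Scheduling this search (for example, hourly) keeps customers.csv refreshed, since outputlookup overwrites the file by default; other searches then use | lookup or | inputlookup against it instead of querying the database. If DB Connect is not an option, a scripted input that writes the CSV into a lookup directory achieves the same batch refresh.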
+1