All Posts



Hello,
We are encountering a problem with parsing in the Fortigate add-on: it does not recognize the devid of our equipment. Because this FortiGate has a serial number starting with FD, it is not matched by the regex.

Regex:
^.+?devid=\"?F(?:G|W|\dK).+?(?:\s |\,|\,\s)type=\"?(traffic|utm|event|anomaly)

From the stanza: [force_sourcetype_fortigate]

We updated it on our side, but is this behavior expected?
Thanks in advance, Best regards.
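For reference, here is one way the pattern could be widened so that devids beginning with FD are also matched (a local workaround sketch, not the official add-on fix; the rest of the [force_sourcetype_fortigate] stanza is assumed unchanged):

^.+?devid=\"?F(?:G|W|D|\dK).+?(?:\s |\,|\,\s)type=\"?(traffic|utm|event|anomaly)

The only change is adding D to the F(?:G|W|D|\dK) alternation so serial numbers such as FD... are accepted.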
It seems there might be a misunderstanding. I'd prefer to steer clear of the makeresults command. My aim is to pinpoint a particular index (application) within a specific environment and gather all events categorized as errors or warnings. Ideally, I'd like these events consolidated in a single location for ease of review. However, not all errors or warnings are pertinent to my needs, so I'd like a filter mechanism where I can selectively exclude events by entering a portion of the log message body into a text box. That text input would then be added to a multi-select input, enabling me to filter out unwanted events. I'd then use the multi-select input's token in the queries I already have. See the dashboard I provided you. Thank you in advance.
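As a rough illustration of the intent (all names here are hypothetical: my_app_index, log_level, and the multi-select token excluded_msgs would come from the dashboard itself), the base search could exclude the selected message fragments like this:

index=my_app_index (log_level="ERROR" OR log_level="WARN") NOT ($excluded_msgs$)

where the multi-select input is configured so that each selected value expands to a wildcarded match on the message body (for example via the input's valuePrefix/valueSuffix and a delimiter of OR).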
Yeah sure, I have a lookup called panels.csv with a single column, Panels, containing these values:

Critical severity vulnerabilities
High severity vulnerabilities
Vulnerabilities solved
Local virtual machines
Outdated operation systems - Server
Outdated operating systems - Endpoint
Outdated operating systems - Unknown
Defender enrollment status
Clients with old Defender patterns
Systems not found in patch management database
Clients missing critical updates
Servers with blacklisted Software
Clients with blacklisted Software
Total Installed blacklisted Software
Blacklisted Software Exceptions

I want to display them horizontally. I was using the search you gave me, but the result comes out in this pattern:

Blacklisted Software Exceptions
Clients missing critical updates
Clients with blacklisted Software
Clients with old Defender patterns
Critical severity vulnerabilities
Defender enrollment status
High severity vulnerabilities
Local virtual machines
Outdated operating systems - Endpoint
Outdated operating systems - Unknown
Outdated operation systems - Server
Servers with blacklisted Software
Systems not found in patch management database
Total Installed blacklisted Software
Vulnerabilities solved

I want to display it like this, but with each value in its own section, just like a table.
Hi @Splunkerninja, do you want to calculate the license consumption or the number of events per index and per day?

In the first case, look at [Settings > License > License Consumption past 60 days > by Index], or run this:

index=_internal [`set_local_host`] source=*license_usage.log* type="Usage"
| eval h=if(len(h)=0 OR isnull(h),"(SQUASHED)",h)
| eval s=if(len(s)=0 OR isnull(s),"(SQUASHED)",s)
| eval idx=if(len(idx)=0 OR isnull(idx),"(UNKNOWN)",idx)
| bin _time span=1d
| stats sum(b) as b by _time, pool, s, st, h, idx
| timechart span=1d sum(b) AS volumeB by idx fixedrange=false
| join type=outer _time [search index=_internal [`set_local_host`] source=*license_usage.log* type="RolloverSummary" earliest=-30d@d | eval _time=_time - 43200 | bin _time span=1d | dedup _time stack | stats sum(stacksz) AS "stack size" by _time]
| fields - _timediff
| foreach * [eval <<FIELD>>=round('<<FIELD>>'/1024/1024/1024, 3)]

In the second case, you could try something like this:

index=* | bin span=1d _time | chart count OVER index BY _time

Ciao.
Giuseppe
So what did you try, and what gave you the wrong results? This is the basic search:

index=_internal source=/opt/splunk/var/log/splunk/license_usage.log idx=* st=*
| stats sum(b) as bytes by idx
| eval gb=round(bytes/1024/1024/1024,3)

Run that over the time range you want.
There is a table visualisation in Splunk, and when you run that command you are getting a table visualisation. Perhaps you can describe your data better, because you are clearly looking for something different from just panels a b c. Your post describing this:

Panels
Blacklisted Software Exceptions
Clients missing critical updates
Clients with blacklisted Software
Clients with old Defender patterns
Critical severity vulnerabilities
Defender enrollment status
High severity vulnerabilities
Local virtual machines
Outdated operating systems - Endpoint
Outdated operating systems - Unknown
Outdated operation systems - Server
Servers with blacklisted Software
Systems not found in patch management database
Total Installed blacklisted Software
Vulnerabilities solved

doesn't actually tell me anything useful. Can you describe your lookup data, what it contains, and give a better description of how you want the data to look in your table?
Hi @rphillips_splk, @hrawat,
It's great to hear that it will finally be fixed, but when will you release those fixed versions? I don't find those tags on Docker Hub. Also, why didn't the Splunk containers crash when this kind of failure happened? We are running the splunk/splunk images (as heavy forwarders) on K8s, and we only noticed the issue when we saw that the network throughput was low on a pod. K8s didn't restart the pod automatically because it didn't crash. The container just sat there as a zombie and didn't do any forwarding.
Thank you!
Regards, DG
Hello Team, I would like to clarify whether it is possible to ingest application Prometheus metrics into Splunk Enterprise through Universal or Heavy Forwarders. Currently we are able to ingest Prometheus metrics into Splunk Enterprise through the Splunk OTel Collector and Splunk HEC. Is there a similar solution using forwarders? Kindly suggest. Additionally, can you confirm whether the Splunk OTel Collector + Fluentd agent is available only as open-source agents?
Hello, by "same error" I mean that after changing the stanza config in distsearch.conf and restarting the service on the SH, btool still showed the Invalid key message, but with a different value.
Your current inputs stanza scrapes all the logs from the folder D:\logs, and you are already sending various events from those logs to the null queue. Now you want to be more selective: keep the INFO-level information from one log file while still discarding certain event types from the others. That becomes a little tricky without testing and having a tinker. Some options that may work:

Option 1
Move that log (jkl.txt) to another folder or a sub-folder and monitor it separately with its own monitor, props and transforms so you can control it. This leaves the others where they are, and you can ingest this one and filter on it as well.

Option 2
Rework your current props and transforms. You may be able to set this by source in props: do this for each of your other logs and send them to the null queue. Either way this needs some level of config and testing.

[source::...my_otherlog.txt]
TRANSFORMS-my_otherlog = my_otherlog_file_null
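For completeness, here is a minimal sketch of the transforms.conf stanza that the props entry above would point at (the stanza name my_otherlog_file_null comes from the props example; the regex is just an illustration and would need to match the events you actually want to drop):

[my_otherlog_file_null]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

With REGEX = . every event from that source is routed to the null queue; a tighter regex would drop only the matching events and keep the rest.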
Hello. I have tried different combinations of the replicationDenyList stanza definition, and in all cases it did not work: with quotes, "apps\TA-microsoft-graph-security-add-on-for-splunk\bin\...", without quotes, apps\TA-microsoft-graph-security-add-on-for-splunk\bin\..., with *, "apps\TA-microsoft-graph-security-add-on-for-splunk\bin\*", with the full path, D:\Splunk Search Head\etc\apps\TA-microsoft-graph-security-add-on-for-splunk\bin\*, and combinations of them. But nothing; I always get the error:

Invalid key in stanza [replicationDenyList] in D:\Splunk Search Head\etc\system\local\distsearch.conf, line 29: MSbin (value: apps\TA-microsoft-graph-security-add-on-for-splunk\bin\*).

Do you have a working example of this stanza? Thanks for your help.
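For reference, this is how the stanza currently reads, reconstructed from the error message above:

[replicationDenyList]
MSbin = apps\TA-microsoft-graph-security-add-on-for-splunk\bin\*

One thing worth trying (an assumption, not a confirmed fix): Splunk .conf path patterns generally use forward slashes even on Windows, so a variant such as MSbin = apps/TA-microsoft-graph-security-add-on-for-splunk/bin/* may validate where the backslash form does not.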
Hi, I came across many queries to calculate the daily ingest per index for the last 7 days, but I am not getting the expected results. Can you please guide me with a query to calculate the daily ingest per index, in GB, for the last 7 days?
Try cutting it down so that it remains valid and representative and then paste it here.
You have not shown anything that indicates that the search has the value you are seeking on the first row of your results. Please share your search and follow @bowesmana's suggestion about which token to use to retrieve the results.
The result coming back is:

Panels
Blacklisted Software Exceptions
Clients missing critical updates
Clients with blacklisted Software
Clients with old Defender patterns
Critical severity vulnerabilities
Defender enrollment status
High severity vulnerabilities
Local virtual machines
Outdated operating systems - Endpoint
Outdated operating systems - Unknown
Outdated operation systems - Server
Servers with blacklisted Software
Systems not found in patch management database
Total Installed blacklisted Software
Vulnerabilities solved

but I want all the results in different sections of the table.
Is there a table visualisation in Splunk?
Hi danspav, thank you so much. The query took around 300 seconds across around 10 indexes with a 4 TB database size and returns what I'm looking for. Perfect!
You can also add this at the end of that previous search, which will make each column name the value of the panel and set that column's value to 1:

| foreach row* [ eval {<<FIELD>>}=1 ]
| fields - row*
You can do this:

| inputlookup panels.csv
| transpose 0

That will give you columns called row 1, row 2, row 3 and so on, with the values found. What do you want the column headings to be?
Building on @gcusello's approach, it can be done more efficiently by not using mvexpand and just filtering out the values that do not match the max value:

index=abc host IN ()
| eval col=_time."|".response_time
| stats max(response_time) AS max_response_time values(col) AS col BY URL
| eval times=mvmap(col, if(match(col, "\|".max_response_time."$"), mvindex(split(col, "|"), 0), null()))
| fields URL max_response_time times
| eval times=strftime(times, "%F %T.%Q")
| rename max_response_time as "Maximum Response Time"
| sort - "Maximum Response Time"

Note that if you have LOTS of values and lots of URLs you may get a spike in memory usage retaining all the values. This also handles the situation where the max response time occurs at more than one time.

You can also do this with eventstats:

index=abc host IN ()
| fields _time response_time URL
| eventstats max(response_time) AS max_response_time by URL
| where response_time=max_response_time
| stats values(max_response_time) AS "Maximum Response Time" values(_time) as times BY URL
| eval times=strftime(times, "%F %T.%Q")
| sort - "Maximum Response Time"

Check which will perform better with your data; eventstats can be slow if crunching lots of data. You can see an example of how this works using either of the techniques above by replacing index=abc... with this, which will give you some simulated data:

| makeresults count=1000
| streamstats c
| eval _time=now() - c
| eval response_time=random() % 1000
| eval URL=mvindex(split("URL1,URL2,URL3,URL4",","), random() % 4)
| fields - c