All Posts

Hi @Splunkerninja, does the search in [Settings > License > License Consumption > last 60 days > split by index] run? I only copied this search. Ciao. Giuseppe
The first query is not giving me any results. Even when I replaced the macro with the actual query, it gives zero results. I basically want the total daily ingest of each index over 7 days.

index=_internal [ rest splunk_server=local /services/server/info | return host] source=*license_usage.log* type="Usage"
| eval h=if(len(h)=0 OR isnull(h),"(SQUASHED)",h)
| eval s=if(len(s)=0 OR isnull(s),"(SQUASHED)",s)
| eval idx=if(len(idx)=0 OR isnull(idx),"(UNKNOWN)",idx)
| bin _time span=1d
| stats sum(b) as b by _time, pool, s, st, h, idx
| timechart span=1d sum(b) AS volumeB by idx fixedrange=false
| join type=outer _time
    [ search index=_internal [ rest splunk_server=local /services/server/info | return host] source=*license_usage.log* type="RolloverSummary" earliest=-30d@d
    | eval _time=_time - 43200
    | bin _time span=1d
    | dedup _time stack
    | stats sum(stacksz) AS "stack size" by _time]
| fields - _timediff
| foreach * [ eval <<FIELD>>=round('<<FIELD>>'/1024/1024/1024, 3)]
Sometimes you do encounter extractions that are not working as expected, and sometimes the logs themselves change. If that happens, apply the fix to the local config as you have done, otherwise it will get overwritten by a new version of the TA. As this add-on is developed by Fortinet, there may be an email address you can report the issue to so they can fix it for the next version; look for the details on Splunkbase or in the documentation.
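If it helps, a minimal sketch of what such a local override typically looks like; the app folder, sourcetype stanza and extraction name below are illustrative placeholders and need to match your installed TA and the field that is actually breaking:

# $SPLUNK_HOME/etc/apps/<your_fortigate_TA_folder>/local/props.conf
# keys placed in local/ survive add-on upgrades; default/ is replaced on upgrade
[<your_fortigate_sourcetype>]
# illustrative extraction name and regex, adjust to your log format
EXTRACT-fixed_devid = devid=\"?(?<devid>[^\",\s]+)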
Hi all, we collect some JSON data from a logfile with a universal forwarder. Most of the time the events are indexed correctly with already extracted fields, but for a few events the fields are not automatically extracted. If I reindex the same events, the indexed extraction is fine. I did not find any entries in splunkd.log indicating that it is not working.

The following props.conf is on the Universal Forwarder and the Heavy Forwarder (maybe someone could explain which parameter is needed on the UF and which on the HF):

[svbz_swapp_task_activity_log]
CHARSET=UTF-8
SHOULD_LINEMERGE=true
LINE_BREAKER=([\r\n]+)
NO_BINARY_CHECK=true
INDEXED_EXTRACTIONS=json
KV_MODE=none
category=Custom
disabled=false
pulldown_type=true
TIMESTAMP_FIELDS=date_millis
TIME_FORMAT=%s%3N

The following props.conf is on the search head:

[svbz_swapp_task_activity_log]
KV_MODE=none

The first time, when it was indexed automatically, it looked like this (screenshot omitted). When I reindex the same event again to another index, it looks fine (screenshot omitted). In the last 7 days it was working correctly for about 32000 events, but for 168 events the automatic field extraction was not working. Here is an example event:

{"task_id": 100562, "date_millis": 1713475816310, "year": 2024, "month": 4, "day": 18, "hour": 23, "minute": 30, "second": 16, "action": "start", "step_name": "XXX", "status": "started", "username": "system", "organization": "XXX", "workflow_id": 14909, "workflow_scheme_name": "XXX", "workflow_status": "started", "workflow_date_started": 1713332220965, "workflow_date_finished": null, "escalation_level": 0, "entry_attribute_1": 1711753200000, "entry_attribute_2": "manual_upload", "entry_attribute_3": 226027, "entry_attribute_4": null, "entry_attribute_5": null}

Does someone have an idea why it is sometimes working and sometimes not? If I now changed KV_MODE on the search head, the fields would be shown correctly for these 168 events, but for all other events the fields would be extracted twice. Using spath with the same names would extract them only once. What is the best workaround to get proper search results for the already indexed events?

Thanks and kind regards
Kathrin
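For reference, the spath workaround I mean would look roughly like this (the index name is a placeholder, the field names are taken from the example event above, and it assumes the whole event is valid JSON in _raw):

index=my_index sourcetype=svbz_swapp_task_activity_log
| spath
| table _time task_id action step_name status username workflow_id workflow_status

With KV_MODE=none on the search head, spath re-parses _raw at search time; for the events where INDEXED_EXTRACTIONS already worked, the values should simply match, so each field shows only once.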
Hello, we are encountering a problem with the parsing in the FortiGate add-on. It does not recognize the devid of our equipment: this FortiGate has a serial number starting with FD, so it is not matched by the regex.

regex: ^.+?devid=\"?F(?:G|W|\dK).+?(?:\s |\,|\,\s)type=\"?(traffic|utm|event|anomaly)

From the stanza: [force_sourcetype_fortigate]

We updated it on our side, but is this behavior normal? Thanks in advance, best regards.
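For illustration, one way such a local adjustment could look, simply adding D to the alternation in the shipped pattern; this is a sketch, not an official fix from the add-on, and since it sits in local/ only the REGEX key needs overriding (the other keys of the stanza are inherited from default/):

# local/transforms.conf in the add-on
[force_sourcetype_fortigate]
REGEX = ^.+?devid=\"?F(?:G|W|D|\dK).+?(?:\s |\,|\,\s)type=\"?(traffic|utm|event|anomaly)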
It seems there might be a misunderstanding. I'd prefer to steer clear of the makeresults command. My aim is to pinpoint a particular index (application) within a specific environment and gather all events categorized as errors or warnings, ideally consolidated into a single location for ease of review. However, not all errors or warnings are pertinent to my needs, so I'd like a filter mechanism where I can selectively exclude events by typing a portion of the log message body into a text box. That text would then be added to a multi-select input, enabling me to filter out undesired events. I'd then use the multi-select input's token in the queries I already have; see the dashboard I provided. Thank you in advance.
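Roughly, a sketch of what I mean; the index, environment and field names here are placeholders for whatever the dashboard already uses:

index=my_app_index env=my_env (log_level=ERROR OR log_level=WARN)
| search NOT ($excluded_messages$)

The idea is that the multi-select input is configured so the token expands to something like message="*text one*" OR message="*text two*" (for example with valuePrefix message="*, valueSuffix *" and delimiter  OR ), so every selected snippet is excluded from the results; a harmless default value would be needed for when nothing is selected.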
Yeah, sure. I have a lookup called panels.csv with a single column, Panels:

Critical severity vulnerabilities
High severity vulnerabilities
Vulnerabilities solved
Local virtual machines
Outdated operation systems - Server
Outdated operating systems - Endpoint
Outdated operating systems - Unknown
Defender enrollment status
Clients with old Defender patterns
Systems not found in patch management database
Clients missing critical updates
Servers with blacklisted Software
Clients with blacklisted Software
Total Installed blacklisted Software
Blacklisted Software Exceptions

I want to display them horizontally. I was using the search you gave me, but the result comes back as a single column in this order:

Blacklisted Software Exceptions
Clients missing critical updates
Clients with blacklisted Software
Clients with old Defender patterns
Critical severity vulnerabilities
Defender enrollment status
High severity vulnerabilities
Local virtual machines
Outdated operating systems - Endpoint
Outdated operating systems - Unknown
Outdated operation systems - Server
Servers with blacklisted Software
Systems not found in patch management database
Total Installed blacklisted Software
Vulnerabilities solved

I want to display them horizontally, but with a section for each entry, just like a table.
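For illustration, one way to get one column per panel out of that lookup; the count here is just a stand-in for whatever value each panel should actually show:

| inputlookup panels.csv
| stats count by Panels
| transpose 0 header_field=Panels column_name=metric

transpose 0 flips the whole table so each Panels value becomes its own column, which gives the horizontal, table-like layout.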
Hi @Splunkerninja, do you want to calculate the license consumption or the number of events per index and per day?

In the first case see [Settings > License > License Consumption past 60 days > by Index], or run this:

index=_internal [`set_local_host`] source=*license_usage.log* type="Usage"
| eval h=if(len(h)=0 OR isnull(h),"(SQUASHED)",h)
| eval s=if(len(s)=0 OR isnull(s),"(SQUASHED)",s)
| eval idx=if(len(idx)=0 OR isnull(idx),"(UNKNOWN)",idx)
| bin _time span=1d
| stats sum(b) as b by _time, pool, s, st, h, idx
| timechart span=1d sum(b) AS volumeB by idx fixedrange=false
| join type=outer _time
    [search index=_internal [`set_local_host`] source=*license_usage.log* type="RolloverSummary" earliest=-30d@d
    | eval _time=_time - 43200
    | bin _time span=1d
    | dedup _time stack
    | stats sum(stacksz) AS "stack size" by _time]
| fields - _timediff
| foreach * [eval <<FIELD>>=round('<<FIELD>>'/1024/1024/1024, 3)]

In the second case, you could try something like this:

index=*
| bin span=1d _time
| chart count OVER index BY _time

Ciao. Giuseppe
So what did you try, and what gave you the wrong results? This is the basic search:

index=_internal source=/opt/splunk/var/log/splunk/license_usage.log idx=* st=*
| stats sum(b) as bytes by idx
| eval gb=round(bytes/1024/1024/1024,3)

Run that over the time range you want.
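If you also want the per-day breakdown over the last 7 days, something along these lines should work; run it on the instance that has license_usage.log (typically the license manager):

index=_internal source=*license_usage.log* type=Usage idx=* earliest=-7d@d
| eval GB=b/1024/1024/1024
| timechart span=1d sum(GB) as GB by idx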
There is a table visualisation in Splunk, and when you run that command you are getting a table visualisation. Perhaps you can describe your data better, because you are clearly looking for something different than just panels a, b, c. Your post describing this:

Panels
Blacklisted Software Exceptions
Clients missing critical updates
Clients with blacklisted Software
Clients with old Defender patterns
Critical severity vulnerabilities
Defender enrollment status
High severity vulnerabilities
Local virtual machines
Outdated operating systems - Endpoint
Outdated operating systems - Unknown
Outdated operation systems - Server
Servers with blacklisted Software
Systems not found in patch management database
Total Installed blacklisted Software
Vulnerabilities solved

doesn't actually tell me anything useful. Can you describe your lookup data, what it contains, and give a better description of how you want the data to look in your table?
Hi @rphillips_splk, @hrawat, it's great to hear that it will finally be fixed, but when will you release those fixed versions? I don't find those tags on Docker Hub. Also, why don't the Splunk containers crash when this kind of failure happens? We are running the splunk/splunk images (as heavy forwarders) on K8s, and we only noticed the issue when we saw that the network throughput was low on a pod. K8s didn't restart the pod automatically because it didn't crash; the container stayed there as a zombie and didn't do any forwarding. Thank you! Regards, DG
Hello team, I would like to clarify whether it is possible to ingest application Prometheus metrics into Splunk Enterprise through Universal or Heavy Forwarders. Currently we are able to ingest Prometheus metrics into Splunk Enterprise through the Splunk OTel Collector and Splunk HEC. Is there a similar solution using forwarders? Please suggest. Additionally, can you confirm whether the Splunk OTel Collector and the Fluentd agent are available only as open-source agents?
Hello, by the same error I mean that after changing the stanza config in distsearch.conf and restarting the service on the search head, btool still showed the Invalid key message, just with a different value.
Your current inputs stanza scrapes all the logs in the folder D:\logs, and you are already sending various events from those logs to nullQueue. Now you want to be more selective: one log file should only keep its info-level information, while the others should still drop certain types of events. This becomes a little tricky without testing and having a tinker. Some options that may work:

Option 1: move that log (jkl.txt) to another folder or a subfolder and monitor it separately with its own monitor, props and transforms so you can control it. This leaves the others where they are, and you can ingest this one now and filter on it as well (see the sketch below).

Option 2: rework your current props and transforms. You may be able to set them by source in props.conf; do this for all your other logs and send them to nullQueue. Either way this all needs some level of config and testing out.

[source::...my_otherlog.txt]
TRANSFORMS-my_otherlog = my_otherlog_file_null
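A rough sketch of what Option 1 could look like, using the documented "send everything to nullQueue, then route the events you want back to indexQueue" pattern; the path, sourcetype and transform names below are made up for illustration and need adjusting to your setup, and the configs belong on the parsing tier (HF/indexer):

# inputs.conf - monitor jkl.txt on its own
[monitor://D:\logs\jkl\jkl.txt]
sourcetype = jkl_log

# props.conf - order matters: the last matching transform wins
[jkl_log]
TRANSFORMS-jkl_filter = jkl_setnull, jkl_keep_info

# transforms.conf
[jkl_setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[jkl_keep_info]
REGEX = \bINFO\b
DEST_KEY = queue
FORMAT = indexQueue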
Hello. I have tried different combinations for the replicationDenyList stanza definition, and in all cases it did not work: with quotes ("apps\TA-microsoft-graph-security-add-on-for-splunk\bin\..."), without quotes (apps\TA-microsoft-graph-security-add-on-for-splunk\bin\...), with * ("apps\TA-microsoft-graph-security-add-on-for-splunk\bin\*"), with the full path (D:\Splunk Search Head\etc\apps\TA-microsoft-graph-security-add-on-for-splunk\bin\*), and combinations of them. But nothing; I always got the error: Invalid key in stanza [replicationDenyList] in D:\Splunk Search Head\etc\system\local\distsearch.conf, line 29: MSbin (value: apps\TA-microsoft-graph-security-add-on-for-splunk\bin\*). Do you have a working example of this stanza? Thanks for your help.
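For reference, the closest shape I could find in distsearch.conf.spec looks like the following; I have not confirmed it works in my environment. The stanza is usually spelled [replicationDenylist] (older versions use [replicationBlacklist]), patterns use forward slashes relative to $SPLUNK_HOME/etc, and ... is the recursive wildcard. MSbin is just the arbitrary label from my config:

[replicationDenylist]
MSbin = apps/TA-microsoft-graph-security-add-on-for-splunk/bin/...

It may also be worth checking the stanza spelling against the distsearch.conf.spec shipped with the installed Splunk version, since btool validates key names against that spec.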
Hi, I came across many queries to calculate the daily ingest per index for the last 7 days, but I am not getting the expected results. Can you please guide me to a query that calculates the daily ingest per index in GB for the last 7 days?
Try cutting it down so that it remains valid and representative and then paste it here.
You have not shown anything that indicates that the search has the value you are seeking on the first row of your results. Please share your search and follow @bowesmana's suggestion about which token to use to retrieve the results.
The result coming back is a single Panels column:

Blacklisted Software Exceptions
Clients missing critical updates
Clients with blacklisted Software
Clients with old Defender patterns
Critical severity vulnerabilities
Defender enrollment status
High severity vulnerabilities
Local virtual machines
Outdated operating systems - Endpoint
Outdated operating systems - Unknown
Outdated operation systems - Server
Servers with blacklisted Software
Systems not found in patch management database
Total Installed blacklisted Software
Vulnerabilities solved

but I want each result in a different section of the table.
Is there a table visualization in Splunk?