All Posts



Hi Splunkers :-), we have a nice feature in Dashboard Studio: "Select all matches" in the multiselect filter. Unfortunately, it is not available in classic dashboards. Can we build similar logic in a classic dashboard?
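One common workaround in classic (Simple XML) dashboards is a static "All" choice whose value is a wildcard, combined with valuePrefix/delimiter so the selected values expand into a search clause. This is not a true "select all matches" like Dashboard Studio, just a sketch; the token name, field, and populating search below are placeholders:

```xml
<input type="multiselect" token="host_tok">
  <label>Host</label>
  <!-- static wildcard choice acting as "All" -->
  <choice value="*">All</choice>
  <default>*</default>
  <prefix>(</prefix>
  <suffix>)</suffix>
  <valuePrefix>host="</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <delimiter> OR </delimiter>
  <fieldForLabel>host</fieldForLabel>
  <fieldForValue>host</fieldForValue>
  <search>
    <query>index=_internal | stats count by host</query>
  </search>
</input>
```

Deselecting the other choices automatically when "All" is picked would additionally need `<change>` handlers on the input.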
Hi @ITSplunk117 , yes, it's possible to override the original sourcetype value with a new one using the procedure at https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/Advancedsourcetypeoverrides Only one attention point: sourcetype overriding, like all transformations, must be performed on the first full Splunk instance that the data passes through, not necessarily on the Indexers. In other words, if you have one or more intermediate Heavy Forwarders, you must locate the transformation on the first Heavy Forwarder, not on the Indexers, because transformations are applied on the first Heavy Forwarder. Ciao. Giuseppe
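The override described above is a props.conf/transforms.conf pair on that first full instance. A minimal sketch, where the source path, trigger regex, and new sourcetype name are placeholders:

```
# props.conf
[source::/var/log/myapp/*.log]
TRANSFORMS-force_st = force_my_sourcetype

# transforms.conf
[force_my_sourcetype]
REGEX = LOGIN_FAILED
FORMAT = sourcetype::my_custom_sourcetype
DEST_KEY = MetaData:Sourcetype
```

Events from that source matching the REGEX are rewritten to the new sourcetype at parse time; a restart of the instance is needed for the change to take effect.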
Hi @Vin , it's really difficult to create a regex without a data sample! Anyway, if the ID to extract is the number in the square brackets and you have only one couple of square brackets, you could use this: | rex "\[(?<your_field>[^\]]+)" I could be more sure if you can share some data. Ciao. Giuseppe
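You can test the rex above against a made-up event before running it on real data; the sample _raw below is only an assumption about what the events look like:

```
| makeresults
| eval _raw="2024-01-01 12:00:00 job finished [12345] status=OK"
| rex "\[(?<your_field>[^\]]+)"
| table your_field
```

With that sample, your_field would contain 12345.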
@sureshkumaar  Verify whether the logs are being received and processed by the syslog forwarders at the specified location. /SERVER50/firewall/ /SERVER52/firewall/  
@sureshkumaar  Ensure that the Splunk user (splunk) has the correct read permissions on /SERVER50/firewall/ and /SERVER52/firewall/. Go to the syslog forwarder and run:

ls -l /SERVER50/firewall/
ls -l /SERVER52/firewall/

If necessary, update permissions:

sudo chmod -R 755 /SERVER50/firewall/
sudo chmod -R 755 /SERVER52/firewall/
sudo chown -R splunk:splunk /SERVER50/firewall/
sudo chown -R splunk:splunk /SERVER52/firewall/

Check splunkd.log for errors related to file monitoring:

grep -i "monitor" $SPLUNK_HOME/var/log/splunk/splunkd.log
grep -i "SERVER50" $SPLUNK_HOME/var/log/splunk/splunkd.log
grep -i "SERVER52" $SPLUNK_HOME/var/log/splunk/splunkd.log
Hi @DPOIRE , how can you identify the group of the received and sent jobs? Are they always present and in a predefined order? If one or more are missing, how can you determine which group is missing? Maybe the missing ones are always the last ones? Ciao. Giuseppe
The below stanzas are collecting data related to firewall logs. The first stanza is from one deployment server and the last two stanzas are from another deployment server, but only the second stanza is working.

[monitor:///SERVER50/firewall/]
whitelist = SERVER50M01ZT*\.log$
index = nw_fortigate
sourcetype = fortigate_traffic
disabled = false

[monitor:///SERVER51/firewall/]
whitelist = SERVER51M01ZT.*\.log$
disabled = false
index = nw_fortigate
sourcetype = fortigate_traffic

[monitor:///SERVER52/firewall/]
whitelist = SERVER52M01ZT.*\.log$
disabled = false
index = nw_fortigate
sourcetype = fortigate_traffic
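One way to confirm which of the stanzas above a forwarder actually received from the deployment server, and which app each setting comes from, is Splunk's btool utility. A sketch, run on the forwarder itself:

```
$SPLUNK_HOME/bin/splunk btool inputs list --debug | grep -i firewall
```

The --debug flag prefixes each line with the file that contributed it, which helps spot a stanza that never arrived or is being overridden by another app.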
@thanh_on  The Monitoring Console uses the splunkd process to gather resource usage stats (e.g., via the REST endpoint /services/server/info). An upgrade might have disrupted this data collection, causing it to report stale or incorrect values. Run this search on the Search Head to manually verify what Splunk detects:

| rest /services/server/info splunk_server=local
| table host physicalMemoryMB
@thanh_on  If your Search Head is running on a virtual machine (VM) or container, the upgrade process might not have properly refreshed the resource allocation visible to Splunk. For instance, the OS might report only 4GB to Splunk due to a misconfigured VM or cgroup setting. On the Search Head, run a system command to confirm the OS sees 16GB. If the OS reports 16GB but Splunk shows 4GB, it's likely a Splunk-side issue. If the OS itself shows 4GB, check your VM or hardware configuration.

Linux:

free -m

Windows:

systeminfo
@thanh_on  The Monitoring Console relies on data collected from the Splunk instance, often via internal logs or REST API endpoints. After an upgrade, it’s possible that the console is still displaying cached or outdated information about the system resources.   Restart the Splunk instance (splunk restart) on the Search Head to force a refresh of system metrics. Then, check the Monitoring Console again after a few minutes to see if the memory updates to 16GB.
Dear fellas, I have an issue with the Monitoring Console showing wrong information about an instance after upgrading from 9.2.2 to 9.4.1 (the latest version). Is this a bug, or do I need to change some configuration? Thanks & best regards.
You're right, @livehybrid, I had the order reversed. Here are the docs I couldn't find earlier: https://docs.splunk.com/Documentation/Splunk/9.4.1/Forwarding/Routeandfilterdatad#Keep_specific_events_and_discard_the_rest
You are correct, @livehybrid. It should first send everything to nullQueue and then select the events you want to keep.
Last week this worked fine, but since 7.0.3 of @splunk/create came out two days ago, linting doesn't work anymore.

npx @splunk/create
New app with component
yarn run setup

That still completes, but setup shows several warnings about things having unmet peer dependencies or incorrect peer dependencies. But yarn run lint now throws an error and doesn't work:

"Error: Failed to load parser '@babel/eslint-parser' declared in '.eslintrc.js >> @splunk/eslint-config/browser-prettier >> ./browser.js >> ./base.js': Cannot find module '@babel/eslint-parser'\n"

The release notes simply say this: splunk_create.spec.conf is now correctly named splunk_create.conf.spec (SUI-5385). But when I compare the package.json from a component created last week to one created today, I see several changes:

dependencies:
@splunk/react-ui changed from "^4.30.0" to "^4.43.0"
@splunk/themes changed from "^0.18.0" to "^0.23.0"

devDependencies:
@splunk/eslint-config changed from "^4.0.0" to "^5.0.0"
@splunk/splunk-utils changed from "^3.0.1" to "^3.2.0"
@splunk/stylelint-config changed from "^4.0.0" to "^5.0.0"
stylelint changed from "^13.0.0" to "^15.11.0"

There may be other things that changed as well; those are just the ones that jumped out at me. Anybody know how to fix this? You can still do yarn run start:demo on the component and it runs, but the lint is broken. Thanks!
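Since the error is literally "Cannot find module '@babel/eslint-parser'", one possible workaround (untested against @splunk/create 7.0.3, and the version range below is an assumption) is to declare the parser as a devDependency of the generated app yourself, e.g. via yarn add --dev @babel/eslint-parser, which would add something like:

```
"devDependencies": {
  "@babel/eslint-parser": "^7.23.0"
}
```

This only papers over the missing transitive dependency; whether the other peer-dependency warnings also need addressing is a separate question.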
In the example those are referenced on line 1 to ensure that only data with those fields is returned; the stats command then counts them and creates new fields (for example "jobs", which contains the count of search_id):

field | value
jobs | 50
total_run_time | 12.4

After the stats these are renamed as follows:

field | value
metric_name:jobs | 50
metric_name:total_run_time | 12.4

This is because a metric must be a key-value pair, where the name is metric_name:<yourMetricName> which is equal to a numeric value. You can also add dimensions, but let's not worry about that for now! The mcollect statement then captures the metric_name:*=<value> fields into your metric index. Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
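The stats-rename-mcollect pattern described above can be sketched end to end; a minimal example, assuming a metrics index named my_metrics already exists:

```
index=_audit action=search search_id=* total_run_time=*
| stats count(search_id) AS jobs, avg(total_run_time) AS total_run_time
| rename jobs AS metric_name:jobs, total_run_time AS metric_name:total_run_time
| mcollect index=my_metrics
```

Each result row becomes metric data points named jobs and total_run_time in the target metrics index; any remaining non-metric_name fields would be treated as dimensions.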
@richgalloway  Just a thought, but aren't the transforms applied in order? So with keep-5000, delete-others it will set indexQueue if it's eventType 5000, and then set nullQueue for everything (including eventType 5000)? I think the queue can be updated multiple times with props/transforms, so it might need to be delete-others first so it sets the queue to nullQueue for everything, and then updates it to indexQueue for eventType 5000? I might be wrong and can't find any solid evidence to back up my theory at the moment, other than trying it out, which I will do if I get a chance!
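The ordering being discussed can be sketched in props/transforms; a hedged example where the sourcetype name and the EventCode pattern are placeholders:

```
# props.conf -- transforms in a TRANSFORMS- list run left to right,
# and a later write to the queue key overwrites an earlier one
[my_sourcetype]
TRANSFORMS-route = delete-others, keep-5000

# transforms.conf
[delete-others]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[keep-5000]
REGEX = EventCode=5000
DEST_KEY = queue
FORMAT = indexQueue
```

With this order, every event is first routed to nullQueue, then the events matching the keep rule are re-routed to indexQueue, which is the "keep specific events and discard the rest" pattern from the Splunk routing docs.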
Thanks! And are search_id and total_run_time created variables, or are they based on the specific fields used in the log events?
Hi @Jailson  What time format is your deletion_date in? If you plan to use this approach in a dashboard, then you can use tokens from the time picker together with relative_time to use the time picker as a filter. Note that you will still need to apply an earliest/latest to the main part of your search; this will only filter.

<form version="1.1" theme="light">
  <label>xmltest</label>
  <fieldset submitButton="false">
    <input type="time" token="field1">
      <label></label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>| makeresults | eval deletion_date=now()-7200 | where deletion_date&gt;relative_time(now(),"$field1.earliest$")</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
</form>

Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
And to confirm: I've updated the lookup CSV file so that empty values now contain a "*" value, but it is still not working.
Hi Will, Great idea, although I'm not having much success, I'm afraid, and the output field contains empty values. Lookup definition screenshot attached (the field names are correct) - can you spot any issues?
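One thing worth checking (a guess, since the full lookup definition isn't visible here): "*" values in a lookup CSV only behave as wildcards if the lookup definition declares WILDCARD match_type for those fields; otherwise "*" is matched literally. In transforms.conf that looks roughly like this, with the lookup and field names as placeholders:

```
[my_lookup]
filename = my_lookup.csv
match_type = WILDCARD(pattern_field)
```

The same setting is available in the UI under Lookups > Lookup definitions > Advanced options > Match type.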