All Posts


This is actually an internal syslog-ng problem and has nothing to do with Splunk. Check the system logs and check the syslog-ng configuration (I'm not a syslog-ng expert, but I believe it has an option to validate your configuration).
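If it helps, a quick sketch of what that validation might look like - the exact flags can vary between syslog-ng versions, so treat this as an assumption and verify against your version's man page:

# Check the configuration for syntax errors without starting the daemon
syslog-ng --syntax-only

# Or run it in the foreground with verbose/debug output to see why startup fails
syslog-ng -F -e -d -v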
Ok. Let me offer you some additional pointers here.

1. Whenever I see a dedup command I raise my eyebrows questioningly - are you sure you know how dedup works, and is it really what you want?

2. Your subsearch is highly suboptimal considering you're just looking for a single, relatively unique value of the guid. As it is now, you're plowing through all data for the given time range, extracting some fields (which you will not use later) with regex, and finally only catching a small subset of those initial events. An example from my home lab environment. If I search

index=mail | rex "R=(?<r>\S+)" | where r="1u0tIb-000000005e9-07kx"

over all-time, Splunk has to throw the regex at almost 11 million events and it takes 197 seconds. If I narrow the search at the very beginning and do

index=mail 1u0tIb-000000005e9-07kx | rex "R=(?<r>\S+)" | where r="1u0tIb-000000005e9-07kx"

the search takes just half a second and scans only 8 events. Actually, if you had your extractions configured properly for your events, you could just do the search like

index="aws_np" aws_source="MDM" type="Contact"

and it would work. You apparently don't have your data onboarded properly, so you have to do it like in your search, but this is ineffective. The same applies to the initial search, where you do a lot of heavy lifting before hitting the where command. By moving the raw "200" and "Create" strings to the initial search you may save yourself a lot of time.

3. To add insult to injury - your appended search is prone to subsearch limits, so it might get silently finalized and you will get wrong/incomplete results without even knowing it.

4. You are doing several separate runs of the spath command, which is relatively heavy. I'm not sure here, but I'd hazard a guess that one "big" spath, with fields filtered immediately afterwards so you don't drag them along and so you limit memory usage, might be better performance-wise.

5. You're statsing only three fields - request_time, output_time and messageGUID. Why extract the text field?
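To illustrate points 2 and 4 on the first part of your search, a sketch only - it reuses the field paths from your original query and is not a tested drop-in replacement:

index=aws* sourcetype="aws:apigateway" "Method response body after transformations:" "200" "Create"
| rex field=_raw "Method response body after transformations: (?<json>[^$]+)"
| spath input=json
| rename header.messageGUID as messageGUID, payload.statusType.code as status, header.action as action
| fields _time messageGUID status action
| where status=200 AND action="Create"

The literal "200" and "Create" terms up front let Splunk discard most events before any regex or JSON parsing runs.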
Hi, I set up syslog-ng to receive syslog from devices, and a Splunk HF on the same server will read those log files. However, I am not able to restart syslog-ng and am getting an error. syslog-ng is running as root and the log file directory is owned by the splunk user.

Job for syslog-ng.service failed because the control process exited with error code.

and systemctl status syslog-ng.service:

× syslog-ng.service - System Logger Daemon
Loaded: loaded (/usr/lib/systemd/system/syslog-ng.service; enabled; preset: enabled)
Active: failed (Result: exit-code) since Sat 2025-04-05 11:39:04 UTC; 9s ago
Docs: man:syslog-ng(8)
Process: 1800 ExecStart=/usr/sbin/syslog-ng -F $SYSLOGNG_OPTS (code=exited, status=1/FAILURE)
Main PID: 1800 (code=exited, status=1/FAILURE)
Status: "Starting up... (Sat Apr 5 11:39:04 2025"
CPU: 4ms
Apr 05 11:39:04 if2 systemd[1]: syslog-ng.service: Scheduled restart job, restart counter is at 5.
Apr 05 11:39:04 if2 systemd[1]: syslog-ng.service: Start request repeated too quickly.
Apr 05 11:39:04 if2 systemd[1]: syslog-ng.service: Failed with result 'exit-code'.
Apr 05 11:39:04 if2 systemd[1]: Failed to start syslog-ng.service - System Logger Daemon.
What do you want to override it with?
Hello @ITWhisperer, how can I override the time of the appended search?
@python wrote:
Can I also identify the owner and the last user who accessed the dashboard, as well as the exact date it was accessed?

Hi @python

To achieve this you can use the following SPL:

index=_audit provenance=* app=* info=completed earliest=-60d provenance!="N/A" app!="N/A" provenance!="UI:Search" provenance!="Scheduler"
| eval provenance=replace(replace(provenance,"UI:Dashboard:",""),"UI:dashboard:","")
| stats latest(user) as last_user, latest(_time) as latest_access, dc(search_id) as searches by provenance, app
| append [| rest /servicesNS/-/-/data/ui/views splunk_server=local count=0
    | fields eai:acl.app title name eai:acl.owner isVisible
    | rename eai:acl.app as app, title as provenance, eai:acl.owner as owner ]
| stats values(*) as * by provenance, app
| where searches>1
| eval latest_access_readable=strftime(latest_access,"%Y-%m-%d %H:%M:%S")

Did this answer help you? If so, please consider:
Adding kudos to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
Don't override the earliest and latest in the first part of the search (then it will take the times from the input field). You can then override the earliest and latest in the appended search to be a different time frame.
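For illustration, a minimal sketch based on the search in your dashboard - here I'm assuming the appended part should cover the whole of the current day, which is just an example window you can change:

index=aws* sourcetype="aws:apigateway" "Method response body after transformations:"
    ``` ...first part of your search unchanged, with no earliest/latest, so it follows $field1.earliest$ / $field1.latest$ from the time input... ```
| append
    [ search index="aws_np" earliest=@d latest=now
    ``` ...rest of the appended search unchanged... ``` ]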
How about alerts with no triggered actions?
<dashboard version="1.1" theme="light">
  <label>forecast</label>
  <row>
    <panel depends="$alwaysHidden$">
      <html>
        <style>
          #forecast g.highcharts-series-1.highcharts-line-series path.highcharts-graph,
          #forecast g.highcharts-series-3.highcharts-legend-item path.highcharts-graph {
            stroke: red;
          }
          #data g.highcharts-series-0.highcharts-line-series path.highcharts-graph {
            stroke: red;
            stroke-width: 3;
            data-z-index: 3;
          }
        </style>
      </html>
    </panel>
    <panel id="forecast">
      <viz type="Splunk_ML_Toolkit.ForecastViz">
        <search>
          <query>| inputlookup internet_traffic.csv | timechart span=120min avg("bits_transferred") as bits_transferred | eval bits_transferred=round(bits_transferred) | predict "bits_transferred" as prediction algorithm=LLP5 holdback=112 future_timespan=224 upper95=upper95 lower95=lower95 | `forecastviz(224, 112, "bits_transferred", 95)`</query>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="drilldown">none</option>
        <option name="trellis.enabled">0</option>
        <option name="trellis.scales.shared">1</option>
        <option name="trellis.size">medium</option>
      </viz>
    </panel>
  </row>
  <row>
    <panel id="data">
      <viz type="Splunk_ML_Toolkit.ForecastViz">
        <search>
          <query>| inputlookup internet_traffic.csv | timechart span=120min avg("bits_transferred") as bits_transferred | eval bits_transferred=round(bits_transferred) | predict "bits_transferred" as prediction algorithm=LLP5 holdback=112 future_timespan=224 upper95=upper95 lower95=lower95 | `forecastviz(224, 112, "bits_transferred", 95)`</query>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="drilldown">none</option>
        <option name="trellis.enabled">0</option>
        <option name="trellis.scales.shared">1</option>
        <option name="trellis.size">medium</option>
      </viz>
    </panel>
  </row>
</dashboard>
It's a tough question. On the one hand - DS is another layer of complexity, and it's usually used when you have bigger environments and want to centralize management of your forwarders. On the other hand - fiddling manually with forwarders can teach you some bad practices. And - especially with standardized forwarders like the docker-based ones - it can actually be easier to manage the UFs with DS.

Anyway, DS is just a functionality of a Splunk Enterprise instance which you don't have to additionally "install". You can enable/disable it by setting the following in serverclass.conf:

[global]
disabled = <boolean>
* Toggles the deployment server off and on.
* Set to true to disable.
* Default: false

You can also enable it in the WebUI.
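To make that concrete, here is a minimal sketch of what a deployment server setup might look like - the server class name linux_uf is made up for illustration, and Splunk_TA_nix is assumed to be the directory name of the Splunk Add-on for Unix and Linux placed under $SPLUNK_HOME/etc/deployment-apps on the DS:

# serverclass.conf on the deployment server
[serverClass:linux_uf]
# which clients this server class applies to (hostname/IP patterns)
whitelist.0 = *

[serverClass:linux_uf:app:Splunk_TA_nix]
# push this app to matching clients and restart splunkd on them afterwards
restartSplunkd = true
stateOnClient = enabled

# deploymentclient.conf on each universal forwarder
# (or run: splunk set deploy-poll your-deployment-server:8089)
[target-broker:deploymentServer]
targetUri = your-deployment-server:8089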
Hi, we just started testing/experimenting with Splunk. Followed a Splunk4Rookies workshop, but that focussed on SPL and dashboards, not on ingesting data. We got the docker-compose installation up and running. I have installed a universal forwarder on a Linux server and was able to send /var/log to the Splunk install.

I find various posts that state:
* I should be using the Splunk Add-on for Unix and Linux
* it needs to be installed on the forwarder
* I should be using a deployment server instead of configuring locally on the Linux server.

Looking for information on how to actually install a deployment server. I seem to be going in circles between pages with old comments (pre 2016, https://community.splunk.com/t5/Deployment-Architecture/How-to-configure-a-deployment-server/m-p/131015/thread-id/4975) and broken links, or pages explaining why I would need a deployment server.

Questions:
Do I need to bother with a deployment server at this stage?
Is it really bad if I install the "Splunk Add-on for Unix and Linux" locally? And how do I actually install it locally?
Can you point me to a basic step-by-step explanation of how I can install a deployment server?
This is intended for a test; can we add the deployment server capability to our Splunk server created with docker compose?
Can I also identify the owner and the last user who accessed the dashboard, as well as the exact date it was accessed?
These are two terms from separate domains. A datamodel is an abstract, standardized model of data, whereas a dashboard is a way of visualizing data and interacting with your Splunk. So your question is a bit like asking "what is the difference between a truck and a vacuum cleaner".
Hi All,

Could you please clarify for me what the difference is between data models and Splunk dashboards?

Thanks
Hi All,

I have created a query and it is working fine in search. I am sharing part of the code from the dashboard. In the first part of the call, you can see I have hardcoded the earliest and latest time. But I want to pass those as input values by selecting the input time provided on the dashboard, and then I want to run the remaining part of the query for the whole day, or let's say another time range, because it is possible that a request received during the mentioned time might get processed later in the day. How can I achieve this? Also, I want to hide a few columns at the end, like message GUID, request time and output time.

<panel>
  <table>
    <title>Contact -Timings</title>
    <search>
      <query>```query for apigateway call```
index=aws* earliest="03/28/2025:13:30:00" latest="03/28/2025:14:35:00" Method response body after transformations: sourcetype="aws:apigateway"
| rex field=_raw "Method response body after transformations: (?&lt;json&gt;[^$]+)"
| spath input=json path="header.messageGUID" output=messageGUID
| spath input=json path="payload.statusType.code" output=status
| spath input=json path="payload.statusType.text" output=text
| spath input=json path="header.action" output=action
| where status=200 and action="Create"
| rename _time as request_time
```dedupe is added to remove duplicates```
| dedup messageGUID
| append ```query for event bridge```
    [ search index="aws_np"
    | rex field=_raw "messageGUID\": String\(\"(?&lt;messageGUID&gt;[^\"]+)"
    | rex field=_raw "source\": String\(\"(?&lt;source&gt;[^\"]+)"
    | rex field=_raw "type\": String\(\"(?&lt;type&gt;[^\"]+)"
    | where source="MDM" and type="Contact" ```and messageGUID="0461870f-ee8a-96cd-3db6-1ca1f6dbeb30"```
    | rename _time as output_time
    | dedup messageGUID ]
| stats values(request_time) as request_time values(output_time) as output_time by messageGUID
| where isnotnull(output_time) and isnotnull(request_time)
| eval timeTaken=(output_time-request_time)/60
| convert ctime(output_time)
| convert ctime(request_time)
| eventstats avg(timeTaken) min(timeTaken) max(timeTaken) count(messageGUID)
| head 1</query>
      <earliest>$field1.earliest$</earliest>
      <latest>$field1.latest$</latest>
    </search>
    <option name="drilldown">none</option>
  </table>
</panel>
Hi @bhaskar5428,

Check out the following:

index=*1644* container_name="ls2-sdp-java" $selected_countries$
| rex field=_raw "\[(?<country>[^,]+),\s(?<cobDate>[^,]+),\s(?<sdpType>[^,]+),"
| rex field=_raw "Number of records:\s*(?<Recordcount>\d+)"
| rex field=_raw "^(?<dateTime>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d+Z)"
| eval DateTime=strptime(dateTime, "%Y-%m-%dT%H:%M:%S.%NZ")
| eval CreatedTime=strftime(DateTime, "%H:%M")
| eval CreatedDate=strftime(DateTime, "%Y-%m-%d")

Example with makeresults:

| makeresults
| eval _raw="2025-03-28T22:04:25.685Z INFO 1 --- [ool-1-thread-11] c.d.t.l.s.s.e.e.NoopLoggingEtlEndpoint : Completed generation for [DE, 2025-03-28, LOAN_EVENT_SDP, 1]. Number of records: 186"
| rex field=_raw "\[(?<country>[^,]+),\s(?<cobDate>[^,]+),\s(?<sdpType>[^,]+),"
| rex field=_raw "Number of records:\s*(?<Recordcount>\d+)"
| rex field=_raw "^(?<dateTime>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d+Z)"
| eval DateTime=strptime(dateTime, "%Y-%m-%dT%H:%M:%S.%NZ")
| eval CreatedTime=strftime(DateTime, "%H:%M")
| eval CreatedDate=strftime(DateTime, "%Y-%m-%d")
| table _raw dateTime country cobDate sdpType Recordcount CreatedTime CreatedDate

Did this answer help you? If so, please consider:
Adding kudos to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
Hi @pdafale_avantor

Please could you confirm, is it the TA for Corelight or the Corelight App For Splunk that you have installed on your search head?

The TA is what you would install on your indexing / HF tier hosts for any index-time parsing requirements, and it is actually specifically hidden from the UI with the following app.conf settings:

[ui]
is_visible = 0

This is because the app is not intended to be used visually. Instead, you would install the Corelight App For Splunk on your search head(s), which does contain a number of Corelight dashboards, lookups and even a custom command. Interestingly, the TA also includes a lot of this content but is not a dedicated visible app - if you specifically want the dashboards then you will need to install the Corelight App For Splunk on your search head(s).

If you have actually installed this and you're not able to see it then please let us know and we can investigate further with you.

Did this answer help you? If so, please consider:
Adding kudos to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
Hi @python

I see you have already accepted an answer to this, however I feel the answer isn't quite right: by using disabled=0 you are missing a bunch of searches which would otherwise be scheduled but have been disabled. So I feel what you need to look for is is_scheduled=0 OR (disabled=1 AND is_scheduled=1), as these are searches which would be scheduled if they weren't disabled.

| rest /services/saved/searches
| search is_scheduled=0 OR (is_scheduled=1 AND disabled=1) alert_type=*
| table disabled, is_scheduled, eai:acl.owner, eai:acl.app, title, qualifiedSearch, alert_type

Did this answer help you? If so, please consider:
Adding kudos to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
Hi @vishalduttauk

Can you confirm what interval your inputs are configured with for this data feed? Does it run every 24 hours?

I think it would be worth looking in the _internal index for errors coming from the add-on in case there is an issue here. You could start out with something like the below and then narrow down as necessary:

index=_internal log_level=ERROR *salesforce*

Did this answer help you? If so, please consider:
Adding kudos to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
Hi @python

Here is a search I use for this - I've added a 60d earliest on the audit events, which is how far it will look back for searches on a particular dashboard (provenance) within a specific app.

index=_audit provenance=* app=* info=completed earliest=-60d
| eval provenance=replace(replace(provenance,"UI:Dashboard:",""),"UI:dashboard:","")
| append [| rest /servicesNS/-/-/data/ui/views splunk_server=local count=0
    | fields eai:acl.app label title eai:acl.owner isVisible
    | rename eai:acl.app as app, title as provenance, name as dashboard_id, eai:acl.owner as owner ]
| stats dc(search_id) as searches by provenance, app
| where searches=0

Did this answer help you? If so, please consider:
Adding kudos to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing