All Posts

What do you want to override it with?
Hello @ITWhisperer, how can I override the time of the appended search?
@python wrote: Can I also identify the owner and the last user who accessed the dashboard, as well as the exact date it was accessed?
Hi @python  To achieve this you can use the following SPL:
index=_audit provenance=* app=* info=completed earliest=-60d provenance!="N/A" app!="N/A" provenance!="UI:Search" provenance!="Scheduler"
| eval provenance=replace(replace(provenance,"UI:Dashboard:",""),"UI:dashboard:","")
| stats latest(user) as last_user, latest(_time) as latest_access, dc(search_id) as searches by provenance, app
| append
    [| rest /servicesNS/-/-/data/ui/views splunk_server=local count=0
    | fields eai:acl.app title name eai:acl.owner isVisible
    | rename eai:acl.app as app, title as provenance, eai:acl.owner as owner ]
| stats values(*) as * by provenance, app
| where searches>1
| eval latest_access_readable=strftime(latest_access,"%Y-%m-%d %H:%M:%S")
Did this answer help you? If so, please consider:
* Adding kudos to show it was useful
* Marking it as the solution if it resolved your issue
* Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Don't override the earliest and latest in the first part of the search; it will then take the times from the dashboard's time input field. You can then override the earliest and latest in the appended search to use a different time frame, as in the sketch below.
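For illustration only, a minimal sketch of that pattern (the index, sourcetypes and field names are placeholders, not taken from your dashboard): the outer search inherits the dashboard's time input, while the appended subsearch carries its own earliest/latest.
index=your_index sourcetype=your_request_sourcetype
| stats count as requests by id
| append
    [ search index=your_index sourcetype=your_response_sourcetype earliest=-1d@d latest=now
    | stats count as responses by id ]
| stats values(*) as * by id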
How about alerts with no triggered actions?
<dashboard version="1.1" theme="light">
  <label>forecast</label>
  <row>
    <panel depends="$alwaysHidden$">
      <html>
        <style>
          #forecast g.highcharts-series-1.highcharts-line-series path.highcharts-graph,
          #forecast g.highcharts-series-3.highcharts-legend-item path.highcharts-graph {
            stroke: red;
          }
          #data g.highcharts-series-0.highcharts-line-series path.highcharts-graph {
            stroke: red;
            stroke-width: 3;
            data-z-index: 3;
          }
        </style>
      </html>
    </panel>
    <panel id="forecast">
      <viz type="Splunk_ML_Toolkit.ForecastViz">
        <search>
          <query>| inputlookup internet_traffic.csv
| timechart span=120min avg("bits_transferred") as bits_transferred
| eval bits_transferred=round(bits_transferred)
| predict "bits_transferred" as prediction algorithm=LLP5 holdback=112 future_timespan=224 upper95=upper95 lower95=lower95
| `forecastviz(224, 112, "bits_transferred", 95)`</query>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="drilldown">none</option>
        <option name="trellis.enabled">0</option>
        <option name="trellis.scales.shared">1</option>
        <option name="trellis.size">medium</option>
      </viz>
    </panel>
  </row>
  <row>
    <panel id="data">
      <viz type="Splunk_ML_Toolkit.ForecastViz">
        <search>
          <query>| inputlookup internet_traffic.csv
| timechart span=120min avg("bits_transferred") as bits_transferred
| eval bits_transferred=round(bits_transferred)
| predict "bits_transferred" as prediction algorithm=LLP5 holdback=112 future_timespan=224 upper95=upper95 lower95=lower95
| `forecastviz(224, 112, "bits_transferred", 95)`</query>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="drilldown">none</option>
        <option name="trellis.enabled">0</option>
        <option name="trellis.scales.shared">1</option>
        <option name="trellis.size">medium</option>
      </viz>
    </panel>
  </row>
</dashboard>
It's a tough question. On the one hand - DS is another layer of complexity, and it's usually used when you have bigger environments and want to centralize management of your forwarders. On the other hand - fiddling manually with forwarders can teach you some bad practices. And - especially with standardized forwarders like the docker-based ones - it can actually be easier to manage the UFs with DS. Anyway, DS is just a functionality of a Splunk Enterprise instance which you don't have to additionally "install". You can enable/disable it in serverclass.conf with:
[global]
disabled = <boolean>
* Toggles the deployment server off and on.
* Set to true to disable.
* Default: false
You can also enable it in the WebUI.
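To make that concrete, here is a minimal, hypothetical serverclass.conf sketch (the server class name and whitelist pattern are invented for illustration; the deployed app itself would sit under $SPLUNK_HOME/etc/deployment-apps/ on the deployment server):
# serverclass.conf on the deployment server, e.g. $SPLUNK_HOME/etc/system/local/serverclass.conf
[global]
disabled = false

# hypothetical server class matching your Linux UFs by hostname pattern
[serverClass:linux_uf]
whitelist.0 = linux-*

# push the Splunk Add-on for Unix and Linux to that class and restart the UF after deployment
[serverClass:linux_uf:app:Splunk_TA_nix]
restartSplunkd = true
The forwarders then need to be pointed at the deployment server (via deploymentclient.conf or "splunk set deploy-poll <host>:8089" on the UF) before they start phoning home.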
Hi, we just started testing/experimenting with Splunk. We followed a Splunk4Rookies workshop, but that focused on SPL and dashboards, not on ingesting data. We got the docker-compose installation up and running. I have installed a universal forwarder on a Linux server and was able to send /var/log to the Splunk install. I find various posts that state:
* I should be using the Splunk Add-on for Unix and Linux
* it needs to be installed on the forwarder
* I should be using a deployment server instead of configuring locally on the Linux server.
I am looking for information on how to actually install a deployment server. I seem to be going in circles between pages with old comments (pre 2016, https://community.splunk.com/t5/Deployment-Architecture/How-to-configure-a-deployment-server/m-p/131015/thread-id/4975) and broken links, or pages explaining why I would need a deployment server. Questions:
* Do I need to bother with a deployment server at this stage?
* Is it really bad if I install the "Splunk Add-on for Unix and Linux" locally? And how do I actually install it locally?
* Can you point me to a basic step-by-step explanation of how I can install a deployment server?
* This is intended for a test; can we add the deployment server capability to our Splunk server created with docker compose?
Can I also identify the owner and the last user who accessed the dashboard, as well as the exact date it was accessed?
These are two terms from separate domains. A datamodel is an abstract, standardized model of data, whereas a dashboard is a way of visualizing data and interacting with your Splunk. So your question is a bit like asking "what is the difference between a truck and a vacuum cleaner".
Hi All,   could you please clarify for me what the difference is between data models and Splunk dashboards?   Thanks
Hi All, I have created one query and it is working fine in search. I am sharing part of the code from the dashboard. In the first part of the call, you can see I have hardcoded the earliest and latest time, but I want to pass those as input values by selecting the time input provided on the dashboard, and then run the remaining part of the query for the whole day (or, say, another time range), because it is possible that a request received during the mentioned time might get processed later in the day. How can I achieve this? Also, I want to hide a few columns at the end, like message GUID, request time and output time.
<panel>
  <table>
    <title>Contact -Timings</title>
    <search>
      <query>```query for apigateway call```
index=aws* earliest="03/28/2025:13:30:00" latest="03/28/2025:14:35:00" Method response body after transformations: sourcetype="aws:apigateway"
| rex field=_raw "Method response body after transformations: (?&lt;json&gt;[^$]+)"
| spath input=json path="header.messageGUID" output=messageGUID
| spath input=json path="payload.statusType.code" output=status
| spath input=json path="payload.statusType.text" output=text
| spath input=json path="header.action" output=action
| where status=200 and action="Create"
| rename _time as request_time
```dedupe is added to remove duplicates```
| dedup messageGUID
| append ```query for event bridge```
    [ search index="aws_np"
    | rex field=_raw "messageGUID\": String\(\"(?&lt;messageGUID&gt;[^\"]+)"
    | rex field=_raw "source\": String\(\"(?&lt;source&gt;[^\"]+)"
    | rex field=_raw "type\": String\(\"(?&lt;type&gt;[^\"]+)"
    | where source="MDM" and type="Contact" ```and messageGUID="0461870f-ee8a-96cd-3db6-1ca1f6dbeb30"```
    | rename _time as output_time
    | dedup messageGUID ]
| stats values(request_time) as request_time values(output_time) as output_time by messageGUID
| where isnotnull(output_time) and isnotnull(request_time)
| eval timeTaken=(output_time-request_time)/60
| convert ctime(output_time)
| convert ctime(request_time)
| eventstats avg(timeTaken) min(timeTaken) max(timeTaken) count(messageGUID)
| head 1</query>
      <earliest>$field1.earliest$</earliest>
      <latest>$field1.latest$</latest>
    </search>
    <option name="drilldown">none</option>
  </table>
</panel>
Hi @bhaskar5428, Check out the following:
index=*1644* container_name="ls2-sdp-java" $selected_countries$
| rex field=_raw "\[(?<country>[^,]+),\s(?<cobDate>[^,]+),\s(?<sdpType>[^,]+),"
| rex field=_raw "Number of records:\s*(?<Recordcount>\d+)"
| rex field=_raw "^(?<dateTime>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d+Z)"
| eval DateTime=strptime(dateTime, "%Y-%m-%dT%H:%M:%S.%NZ")
| eval CreatedTime=strftime(DateTime, "%H:%M")
| eval CreatedDate=strftime(DateTime, "%Y-%m-%d")
Example with makeresults:
| makeresults
| eval _raw="2025-03-28T22:04:25.685Z INFO 1 --- [ool-1-thread-11] c.d.t.l.s.s.e.e.NoopLoggingEtlEndpoint : Completed generation for [DE, 2025-03-28, LOAN_EVENT_SDP, 1]. Number of records: 186"
| rex field=_raw "\[(?<country>[^,]+),\s(?<cobDate>[^,]+),\s(?<sdpType>[^,]+),"
| rex field=_raw "Number of records:\s*(?<Recordcount>\d+)"
| rex field=_raw "^(?<dateTime>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d+Z)"
| eval DateTime=strptime(dateTime, "%Y-%m-%dT%H:%M:%S.%NZ")
| eval CreatedTime=strftime(DateTime, "%H:%M")
| eval CreatedDate=strftime(DateTime, "%Y-%m-%d")
| table _raw dateTime country cobDate sdpType Recordcount CreatedTime CreatedDate
Did this answer help you? If so, please consider:
* Adding kudos to show it was useful
* Marking it as the solution if it resolved your issue
* Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @pdafale_avantor  Please could you confirm, is it the TA for Corelight or the Corelight App For Splunk that you have installed on your search head? The TA is what you would install on your indexing / HF tier hosts for any index-time parsing requirements, and this app is actually specifically hidden from the UI with the following app.conf settings:
[ui]
is_visible = 0
This is because the app is not intended to be used visually. Instead you would install the Corelight App For Splunk on your search head(s), which does contain a number of Corelight dashboards, lookups and even a custom command. Interestingly the TA also includes a lot of this content but is not a dedicated visible app - if you specifically want the dashboards then you will need to install the Corelight App For Splunk on your search head(s). If you have actually installed this and you're not able to see it then please let us know and we can investigate further with you.
Did this answer help you? If so, please consider:
* Adding kudos to show it was useful
* Marking it as the solution if it resolved your issue
* Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
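As a quick, hedged check (not taken from the Corelight documentation), you could list which Corelight packages are actually installed and visible on the search head with something like:
| rest /services/apps/local splunk_server=local
| search label="*Corelight*" OR title="*corelight*"
| table title label version visible disabled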
Hi @python  I see you have already accepted an answer to this, however I feel the answer isn't quite right; by using disabled=0 you are missing a bunch of searches which would otherwise be scheduled but have been disabled. So I feel you need to look for is_scheduled = 0 OR (disabled=1 AND is_scheduled = 1), as these are searches which would be scheduled if they weren't disabled.
| rest /services/saved/searches
| search is_scheduled=0 OR (is_scheduled=1 AND disabled=1) alert_type=*
| table disabled, is_scheduled, eai:acl.owner, eai:acl.app, title, qualifiedSearch, alert_type
Did this answer help you? If so, please consider:
* Adding kudos to show it was useful
* Marking it as the solution if it resolved your issue
* Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @vishalduttauk  Can you confirm what interval your inputs are configured with for this data feed? Does it run every 24 hours? I think it would be worth looking in the _internal index for errors coming from the add-on, in case there is an issue here. You could start out with something like the below and then narrow down as necessary:
index=_internal log_level=ERROR *salesforce*
Did this answer help you? If so, please consider:
* Adding kudos to show it was useful
* Marking it as the solution if it resolved your issue
* Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @python  Here is a search I use for this - I've added a 60d earliest on the audit events which is how far it will look back for searches on a particular dashboard (provenance) within a specific app.
index=_audit provenance=* app=* info=completed earliest=-60d
| eval provenance=replace(replace(provenance,"UI:Dashboard:",""),"UI:dashboard:","")
| append
    [| rest /servicesNS/-/-/data/ui/views splunk_server=local count=0
    | fields eai:acl.app label title eai:acl.owner isVisible
    | rename eai:acl.app as app, title as provenance, name as dashboard_id, eai:acl.owner as owner ]
| stats dc(search_id) as searches by provenance, app
| where searches=0
Did this answer help you? If so, please consider:
* Adding kudos to show it was useful
* Marking it as the solution if it resolved your issue
* Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @shraddha09  I'm not really sure I understand what you're looking for, however generally eval should be done BEFORE you run your where statement - see the sketch below. Please could you share the SPL which is not working as expected so we can help further?
Did this answer help you? If so, please consider:
* Adding kudos to show it was useful
* Marking it as the solution if it resolved your issue
* Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
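For example, a generic runnable sketch (with made-up field names) showing eval creating the field before where filters on it:
| makeresults count=5
| streamstats count as n
| eval doubled = n * 2
| where doubled > 4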
Hi @swamybhatta  The mongo command isn't intended to be run directly from the CLI - the error you are seeing is likely due to the missing ENV variables such as LD_LIBRARY_PATH. Does mongo give an error when the process is created by Splunk as part of the service's boot sequence?
Did this answer help you? If so, please consider:
* Adding kudos to show it was useful
* Marking it as the solution if it resolved your issue
* Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
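If it helps, a hedged starting point for checking what the KV store's mongod reports when Splunk itself starts it (this assumes the default internal logging, where mongod.log is indexed into _internal):
index=_internal (sourcetype=mongod OR source=*mongod.log*) earliest=-24h (error OR exception OR failed)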
These representations of your events are not formatted as JSON data (your original post was not either although it was a lot closer). Please repost your events in unformatted form e.g. with double quotes around field names and strings, etc. This makes it a lot easier for volunteers to try out solutions on your data before posting suggestions, which will be more efficient in the long run.
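For illustration, a made-up event in valid JSON form (field names invented) looks like this, with every field name and string value wrapped in double quotes:
{"timestamp": "2025-03-28T13:30:00Z", "status": "completed", "details": {"id": "abc-123", "count": 2}}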