All Posts


Hi everyone, I am experiencing an error when sending events from Mission Control to Splunk SOAR. The action always fails when the "Send to SOAR" action is triggered automatically through Adaptive Response. Before automating it, I tried sending event data from Mission Control to SOAR manually by clicking the three dots and selecting 'Run Adaptive Response Actions', and everything went smoothly. Has anyone experienced a similar problem? Thanks, Zake
Hi @DarthHerm

Call me cynical, but I suspect it's a result of what has been done rather than of the Splunk upgrade files themselves; even rolling back the files might not correct things. Some things to check:

1) File permissions: does the service account running Splunk have access to all the relevant files on the UF?
2) How are your apps deployed to the UF - via a DS or manually? Can you confirm the app is installed?
3) Are there any relevant logs in the _internal index for one of these hosts, particularly anything that mentions PerfMon? (See the search sketch below.)
4) Based on the docs at https://help.splunk.com/en/splunk-enterprise/get-started/get-data-in/9.4/get-windows-data/monitor-windows-performance it seems important that the service user is in the "Performance Monitor Users" group - are you able to confirm this, please?
5) Can you run btool ($SPLUNK_HOME\bin\splunk cmd btool inputs list --debug), which should give a more detailed version of the inputs.conf you provided? Has it loaded the relevant config from your custom configuration?
6) Lastly, is environment_performance_logs an event index (rather than a metric index)?
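For point 3, a search along these lines might help. This is a minimal sketch: the host value is a placeholder for one of your affected forwarders, and the filter terms are a starting point rather than an exhaustive list:

index=_internal host=<affected_uf_host> source=*splunkd.log* (log_level=ERROR OR log_level=WARN) PerfMon
| stats count by component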
Hi @BraxcBT

Do you get the same behaviour when accessing from a different browser (or incognito mode in the same browser)? Could you also check your browser's Developer Tools (it may vary depending on what you are using) and look at the Console tab - are there any errors there? Also check the Network tab (you might need to reload the page after opening it) and see if any requests are red or return 4xx/5xx errors. If so, click on the request and open the Response tab to see what error it returns, and let us know.
Thought I would post here in the community as well since I have this opened with support. A couple of weeks ago, another agency pushed updates to Splunk Universal Forwarder to half of my hosts without my knowledge or consent. Those hosts were updated from 9.2.0.1 to 9.2.6.0. The updates went unnoticed for a couple of weeks since the events from our custom application and Event Viewer continued to get indexed.

I started to notice an issue on one dashboard where no perfmon events were coming in. I reviewed another dashboard that checks the status of my forwarders, and that's where I saw the updated installs. I went over the index the perfmon counters go to and validated that only the hosts using Universal Forwarder 9.2.0.1 were coming in.

My version of Enterprise was 9.2.1.0, and support recommended I update Enterprise to a newer version. After some testing, I went to Enterprise 9.3.5.0 - not ready for 9.4.x with trying to update the kvstore. Reviewing the Universal Forwarder compatibility matrix, I've kept my Universal Forwarders on 9.2.0.1 and 9.2.6.0, and two were updated to 9.3.5.0. Updating Enterprise didn't correct the issue.

I went through troubleshooting on the host, looking over the config files. I did a rebuild of the resource counters and restarted the Splunk forwarder service on one of the hosts using forwarder 9.2.6.0. On another of the hosts, I tried adding the service account as a local member of the Administrators and Remote Management Users groups and adding a path variable for SPLUNK_HOME at "c:\program files\splunkuniversalforwarder".

Chatted with the tech who pushed Universal Forwarder, and they're not going to do that again. The hosts that got updated are members of my custom application's lower environments. I can live without the perfmon counters in the lower environments, and none of my hosts in our production environment were updated. I know that if I uninstall Forwarder and reinstall 9.2.0.1, the perfmon counters will resume coming in.

Convinced it's a change I need to make, and thought I would check with the community who have updated their forwarders. I attached a copy of the inputs.conf from one of my hosts, which is the same for all of them (aside from the environment name).
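The attached inputs.conf isn't reproduced here, but for context, a typical Windows perfmon stanza on a UF looks something like the sketch below. The object, counters, interval, and index are illustrative assumptions, not the poster's actual config:

[perfmon://CPU]
object = Processor
counters = % Processor Time; % User Time
instances = *
interval = 60
index = environment_performance_logs
disabled = 0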
I used the metric finder to graph jvm.gc.duration_count, then exported the results to CSV. I also have a SignalFlow API call to grab the same data. The counts are the same, except they are offset by 5 minutes. In other words, my SignalFlow output says 303 GCs at 15:11, but the metric finder export shows the same 303 GCs at 15:16. Subsequent periods are offset in the same way. My code is using ChannelMessage.DataMessage.getLogicalTimestampMs(). Postman output looks like this:

data: {
data:   "data" : [ {
data:     "tsId" : "AAAAAMcvg8Q",
data:     "value" : 1.0
data:   }, {
data:     "tsId" : "AAAAAKgFlvo",
data:     "value" : 303.0
data:   } ],
data:   "logicalTimestampMs" : 1750709460000,
data:   "maxDelayMs" : 12000
data: }

What's going on? Thanks
I have never seen this before, and I will be completely transparent that I put your question into an AI engine, so the response may not be anything close to what you are looking for. The AI seemed to think you might be having web browser caching issues (I have actually had web caching problems, just never had them affect the pages you mentioned). The recommendation is to clear your browser cache; the method I use most often is incognito mode. Again, no idea if this will help, but I do know that when I changed the navigation menus on an app, they would not update in my browser, and I had to run in incognito mode or open a different browser that hadn't cached my Splunk site to see the changes. Hope this helps.
I am logged in as the admin user, but whenever I try to access Tokens, Users, or other settings pages, I get a blank page. I’m not sure what to do next. #Splunk #Enterprise
OK. So this is not (or at least might not be) about the phonehomes as such, but about the info shown in the DS console. I'd go for:

1) Verifying on selected forwarders that the phonehomes are shown in splunkd.log.
2) Checking the logs on the DS itself to see if it can see the phonehomes.
3) Checking if you have the selective routing properly configured on the DS: https://help.splunk.com/en/splunk-enterprise/administer/manage-distributed-deployments/9.2/configure-the-deployment-system/upgrade-pre-9.2-deployment-servers (it's not about upgraded instances only; we had this issue lately on a new installation of 9.3.something).

A search sketch for the first two checks follows below.
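Something like the following might work for steps 1 and 2. This is a sketch, assuming default internal logging; exact component names can vary by version.

On a forwarder, look for phonehome activity in its own logs:

index=_internal source=*splunkd.log* host=<forwarder_host> phonehome

On the DS, check whether phonehomes are being recorded in its internal index:

index=_dsphonehome | stats latest(_time) AS last_phonehome BY hostname

(The hostname field name in _dsphonehome is an assumption - check the raw events for the actual field.)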
"How did you determine this?" - This is what the Forwarder Management web UI shows us; the client phone-home timestamp coincides with the restart.
What do you mean by "clients phoning home only when you restart the DS"? How did you determine this? The clients phone home on schedule - it's asynchronous versus whatever the DS is doing.
Ok. Can you please stop posting random copy-pastes from LLMs? LLMs are a useful tool... if they supplement your knowledge and expertise. Otherwise you're only introducing confusing, wrong advice into the thread. Your advice about setting both indexed extractions and KV mode at the same time is simply wrong - it will lead to duplicate fields. Your line breaker is also needlessly complicated. BREAK_ONLY_BEFORE has no effect with line merging disabled. Your advice about an add-on for Fortigate is completely off, because the TA for Fortigate available on Splunkbase handles the default Fortigate event format, not JSON. Adjusting the events to be parsed by that add-on will require more than just installing it. And there is no _MetaData:tags key! LLMs are known for making things up. Copy-pasting their delusions here isn't helping anyone. Just stop leading people astray.

@LOP22456 I assume that either there are multiple events per line in your input file, or your events are multiline, and therefore the usual approach of splitting the file on line breaks doesn't work. Unfortunately, there's no bulletproof solution for this, since handling structured data with regexes alone is bound to be wrong in border cases. You can assume that your input breaks where you have two "touching" braces without a comma between them (even better if they must be on separate lines - that could give you a "stronger" line breaker), but there could still be a border case where you have such a string inside your JSON. In most cases, though, something like LINE_BREAKER = }([\r\n\s]*){ should do. In some border cases you might end up with broken events.
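Put together as a props.conf stanza, the suggestion above would look roughly like this. The sourcetype name is a placeholder, and the TIME_* settings are assumptions to adjust for your actual events:

[my_json_sourcetype]
# disable line merging and break between adjacent JSON objects
SHOULD_LINEMERGE = false
LINE_BREAKER = }([\r\n\s]*){
# placeholder - point this at your events' actual timestamp field
TIME_PREFIX = "timestamp"\s*:\s*"
MAX_TIMESTAMP_LOOKAHEAD = 40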
Still no success after attempting all the steps below. I checked splunkd.log on a few forwarders as well as on the deployment server, and neither indicated connection errors. One question I have is regarding indexes: from the web UI I see _dsphonehome, _dsappevent, and _dsclient, but I don't see those indexes in the indexes.conf file on the deployment server. Another note: I found this and am wondering if it could help? Our Splunk instance is at version 9.3.1. https://community.splunk.com/t5/Splunk-Enterprise/After-upgrading-my-DS-to-Enterprise-9-2-2-clients-can-t-connect/m-p/695607
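One way to see where those indexes are actually defined (they typically come from a default indexes.conf shipped with Splunk rather than your local one - an assumption worth verifying) and whether they're receiving data:

$SPLUNK_HOME/bin/splunk cmd btool indexes list _dsphonehome --debug

| eventcount summarize=false index=_dsphonehome index=_dsappevent index=_dsclient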
Okay @datachacha

I've been having a good think about this, and I don't think I have an elegant solution - but I think I do have *a* solution. This uses a hidden token/text box to the side and a search to determine _time + 2 hours. You can then use this in your other queries as earliest/latest, as per the sample event on the dashboard, using $globalTimeSpl:result.earliest$ and $globalTimeSpl:result.latest$.

Here is the full JSON to have a play around with - does this do what you need?

{ "title": "testing", "description": "", "inputs": { "input_MPUmpGoR": { "options": { "defaultValue": "DEFAULT", "token": "calc_earliest" }, "title": "Earliest", "type": "input.text" }, "input_zIorjrMc": { "options": { "defaultValue": "-24h@h,now", "token": "tr_global" }, "title": "Main Time Selector", "type": "input.timerange" } }, "defaults": { "dataSources": { "ds.search": { "options": { "queryParameters": { "earliest": "-24h@h", "latest": "now" } } } } }, "visualizations": { "viz_BcDlqy4I": { "options": { "markdown": "Earliest = $globalTimeSpl:result.earliest$ \nLatest = $globalTimeSpl:result.latest$" }, "type": "splunk.markdown" }, "viz_NgmH6lHI": { "dataSources": { "primary": "ds_BlYVOfBA" }, "title": "This shows for time selected + 2hours", "type": "splunk.table" }, "viz_Nqdf4h2p": { "dataSources": { "primary": "ds_ccCiW2S8" }, "eventHandlers": [ { "options": { "tokens": [ { "key": "row._time.value", "token": "calc_earliest" } ] }, "type": "drilldown.setToken" } ], "type": "splunk.column" }, "viz_zUx2Zt29": { "dataSources": { "primary": "ds_ZKBDXZy2_ds_BlYVOfBA" }, "type": "splunk.table" } }, "dataSources": { "ds_BlYVOfBA": { "name": "global", "options": { "query": "index=_internal earliest=$globalTimeSpl:result.earliest$ latest=$globalTimeSpl:result.latest$ \n| addinfo \n| head 1\n| table info* _raw" }, "type": "ds.search" }, "ds_ZKBDXZy2_ds_BlYVOfBA": { "name": "globalTimeSpl", "options": { "enableSmartSources": true, "query": "| makeresults \n| addinfo\n| eval earliest=IF($calc_earliest|s$!=\"DEFAULT\",$calc_earliest|s$,info_min_time)\n| eval latest=IF($calc_earliest|s$!=\"DEFAULT\",$calc_earliest$+7200, info_max_time)", "queryParameters": { "earliest": "$tr_global.earliest$", "latest": "$tr_global.latest$" } }, "type": "ds.search" }, "ds_ccCiW2S8": { "name": "tstat", "options": { "query": "| tstats count where index=_internal by _time span=1h", "queryParameters": { "earliest": "$tr_global.earliest$", "latest": "$tr_global.latest$" } }, "type": "ds.search" }, "ds_rt307Czb": { "name": "timeSPL", "options": { "enableSmartSources": true, "query": "| makeresults \n| addinfo", "queryParameters": { "earliest": "-60m@m", "latest": "now" } }, "type": "ds.search" } }, "layout": { "globalInputs": [ "input_zIorjrMc" ], "layoutDefinitions": { "layout_1": { "options": { "display": "auto", "height": 960, "width": 1440 }, "structure": [ { "item": "viz_Nqdf4h2p", "position": { "h": 300, "w": 1390, "x": 10, "y": 210 }, "type": "block" }, { "item": "viz_NgmH6lHI", "position": { "h": 140, "w": 1390, "x": 10, "y": 60 }, "type": "block" }, { "item": "viz_BcDlqy4I", "position": { "h": 50, "w": 300, "x": 20, "y": 10 }, "type": "block" }, { "item": "input_MPUmpGoR", "position": { "h": 82, "w": 198, "x": 1470, "y": 50 }, "type": "input" }, { "item": "viz_zUx2Zt29", "position": { "h": 100, "w": 680, "x": 1470, "y": 130 }, "type": "block" } ], "type": "absolute" } }, "tabs": { "items": [ { "label": "New tab", "layoutId": "layout_1" } ] } } }
Hi @LizAndy123

When configuring your alert, select it to run "For each result" under the Trigger settings, as per the screenshot below:
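If you manage the alert in configuration files rather than the UI, the equivalent setting is alert.digest_mode in savedsearches.conf - 0 triggers the action once per result, 1 (the default) once for the whole result set. A minimal sketch, with the stanza name as a placeholder:

[My Mattermost Alert]
# fire the alert action once for each result
alert.digest_mode = 0
counttype = number of events
relation = greater than
quantity = 0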
So I have successfully configured some reports and alerts that send $result to Mattermost. My question is how to deal with a search which returns, say, 5 results. Example - the current search may return: Example Text: Hello World. How do I pass each individual $result? The search could return Hello World, followed by Hello World2, followed by Hello World3. If I put $result.text$ it prints Hello World, but if I then want to show the second or third result... is that possible through this?
Ok, thank you. I thought there was something up with the $'s; they would be accepted as a static value instead of a predefined token when setting them up in the interactions menu, but the logic wouldn't work. And the same seems to be the case for the second one: the eval statement just did not work at all as intended. I was wondering why this didn't work.
Thank you for the timely response. I tried what you recommended and ran into a few issues that I was not able to diagnose or fix with the troubleshooting tips provided. I got everything looking exactly as you said. However, $result._time$ doesn't seem to evaluate to a time whatsoever; when I check the value, it is literally just "$result._time$". The latest-time value gets set to "relative_time(-7d@h", which appears incomplete as shown. I get an error on the visualization saying Invalid earliest_time, and both earliest and latest show invalid values. When I tried the troubleshooting eval command you recommended, it did not fix the issue. The time should be coming in correctly.
@LAME-Creations @LOP22456

Please do not set both INDEXED_EXTRACTIONS and KV_MODE = json. See the props.conf docs for more info - https://docs.splunk.com/Documentation/Splunk/latest/Admin/Propsconf: "When 'INDEXED_EXTRACTIONS = JSON' for a particular source type, do not also set 'KV_MODE = json' for that source type. This causes the Splunk software to extract the JSON fields twice: once at index time, and again at search time."
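A minimal sketch of what that looks like in practice, with a placeholder sourcetype - index-time extraction on the instance that first parses the data, with search-time JSON extraction switched off on the search head:

props.conf on the parsing instance (e.g. UF for structured inputs):
[my_json_sourcetype]
INDEXED_EXTRACTIONS = json

props.conf on the search head:
[my_json_sourcetype]
KV_MODE = none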
Just posting to confirm this, though I've never written in before. Running into it now: generating a summary index changes the value type to (AFAICT) a string, meaning the previous multivalue field of 5136, 5136, which is searchable via EventCode=5136, is now broken in the summary index, where the value is something like "5136\n5136", which... is not helpful at all.
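A possible search-time workaround, assuming the multivalue field really was flattened with embedded newlines when collected: split it back apart when reading from the summary index, or join it with an explicit delimiter before collecting. The index name is a placeholder.

Reading back:
index=my_summary | eval EventCode=split(EventCode, "\n") | search EventCode=5136

Before collecting:
... | eval EventCode=mvjoin(EventCode, ",") | collect index=my_summary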
Hi, I'm probably not understanding the question completely, so feel free to provide a specific example if you want. One thing I will point out: when thinking about what information you can get from an inferred service, it's limited to what you can see from the trace spans generated when an instrumented service calls that uninstrumented, inferred service. Here is a screenshot of a service-centric view of an inferred service and what you can see about it.