All Posts

Hello @deepakc, thank you for taking the time to respond. Maybe I was unclear with my question, so let me rephrase it. TL;DR: how can changes on a forwarder affect the search head's behavior at search time? My primary goal is to understand how the extractions work, particularly in this detail. Longer version: The issue here is with search-time extractions. To my knowledge, these extractions are done by the search head and/or the respective indexer when the data is being searched (i.e., after all writing to the index is finished, no matter how the data was cooked). Forwarders are not involved in searching, therefore I don't understand how a forwarder can affect the search. The events themselves seem to be indexed the same way before and after the upgrade; I don't see any difference in the _raw field contents and generally no reason for search-time extractions to stop working. What in my assumptions and knowledge above is not correct? Your suggestions apply, as far as I know, to index-time extractions, but that is not the case here.
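For reference, a search-time extraction is typically defined in props.conf on the search head and shipped to the indexers in the knowledge bundle, which is why forwarders play no role in it. A minimal sketch of what such a stanza looks like (the sourcetype and field names here are hypothetical, not taken from the poster's environment):

```
# props.conf on the search head -- a hypothetical example, not the
# poster's actual configuration.
[my_sourcetype]
# EXTRACT-* rules run at search time, on the search head or on the
# indexers via the replicated knowledge bundle, each time matching
# events are searched.
EXTRACT-status = status=(?<status_code>\d+)
```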
Hello @Amadou, I believe SSE should be used in conjunction with Splunk Enterprise Security. The correlation searches within ES, ESCU, and SSE each have MITRE tactics associated with them. On the SSE dashboard, tactics covered by a larger number of correlation searches are highlighted in a darker color, and those covered by fewer searches appear in a lighter shade. I do not think this is possible to achieve on Splunk Enterprise alone. However, if you are able to achieve this through a custom solution using .css or .js, let the community know. Thanks, Tejas.
Hello @deangoris, The Splunk Packaging Toolkit works in a similar fashion. Packages created using the toolkit should not have a local folder within them; otherwise the upload will fail in the UI itself. The best way to deal with this situation is to create a barebone app from the UI and migrate all the knowledge objects (KOs) to that custom private app. This also makes it possible to modify the objects from the UI in the future. Thanks, Tejas. --- If the above solution helps, an upvote is appreciated..!!
Hi Team, we need your assistance with configuration changes in Splunk. The requirement is to change the timezone based on different "source" values (not sourcetype). We have different sources defined in our application. All of them are in their respective server timezone, except for the two sources below (these two are in the EST timezone, and our requirement is to change them to CET):

source=/applications/testscan/*/testscn01/*
source=/applications/testscan/*/testcpdom/*

For the rest of the sources, we do not want to make any change to the timezone. For example:

source=/applications/testscan/*/testscn02/*
source=/applications/testscan/*/testnycus/*
source=/applications/testscan/*/testnyus2/*
source=/applications/testscan/*/testshape/*
source=/applications/testscan/*/testshape2/*
source=/applications/testscan/*/testshape3/*

Please note, we do not have any props.conf file available or configured on the server. We maintain the Splunk configuration only in inputs.conf. The present content of inputs.conf is as below:

[monitor:///applications/testscan/.../]
whitelist = (?:tools\/test\/log\/|TODAY\/LOGS\/)*\.(?:log|txt)$
index = testscan_prod
sourcetype = testscan
_TCP_ROUTING = in_prod

[monitor:///applications/testscan/*/*/tools/test_transfer/log]
index = testscan_prod
sourcetype = testscan
_TCP_ROUTING = in_prod

[monitor:///applications/testscan/*/*/tools/test_reports/log]
index = testscan_prod
sourcetype = testscan
_TCP_ROUTING = in_prod

Please suggest what changes need to be made so that the timezone can be managed based on the "source" information provided. @ITWhisperer
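Timezone assignment in Splunk happens at parse time and is controlled by the TZ setting in props.conf, which can be scoped per source with a `[source::...]` stanza. A minimal sketch of what that could look like for the two sources above; note this is an assumption-laden example: props.conf must live where the data is parsed (the indexers, or a heavy forwarder if one sits in the path), and whether TZ should name the zone the timestamps are written in or the target zone depends on what "change it to CET" means exactly, so verify the value against a sample event:

```
# props.conf -- place on the indexers (or first heavy forwarder); TZ is
# applied at index time, so it only affects newly indexed events.
# Assumption: the timestamps in these two sources should be read as CET.
[source::/applications/testscan/*/testscn01/*]
TZ = Europe/Paris

[source::/applications/testscan/*/testcpdom/*]
TZ = Europe/Paris

# All other sources are intentionally left without a TZ override and
# keep their server-default timezone handling.
```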
In this case a full file system caused the file below to be empty; even after a splunkd restart, it was still empty. That was the cause of this error:

[splunk@hf001 ~]$ ll /opt/splunk/quarantined_files/
total 8
-rwxr-x--- 1 splunk splunk   0 Jun  7 14:18 quarantine_manifest.json
-rwxr-x--- 1 splunk splunk 208 Mar 16  2023 README.md

Adding the Enterprise default config to the file solved the issue:

[splunk@hf001 ~]$ cat /opt/splunk/quarantined_files/quarantine_manifest.json
{"enable_jQuery2": "not-restricted", "enable_unsupported_hotlinked_imports": "not-restricted"}
So basically I'd like to do a concatenation between DeviceProcess and DeviceRegistry events in an advanced hunting query | advhunt
Hello @jrs42, In Dashboard Studio, there is no option to specify a drilldown for a particular cell or row. When you enable the drilldown, by default it gets applied to a cell. You can use the following JSON source code as an example of a drilldown that sets a token in Dashboard Studio:

{
  "visualizations": {
    "viz_dNS83Gj5": {
      "type": "splunk.table",
      "dataSources": {
        "primary": "ds_aQ7285AG"
      },
      "eventHandlers": [
        {
          "type": "drilldown.setToken",
          "options": {
            "tokens": [
              {
                "token": "log_level_tok",
                "key": "row.log_level.value"
              }
            ]
          }
        }
      ]
    },
    "viz_qGr86Sbm": {
      "type": "splunk.events",
      "options": {},
      "dataSources": {
        "primary": "ds_MmJUCreO"
      }
    }
  },
  "dataSources": {
    "ds_aQ7285AG": {
      "type": "ds.search",
      "options": {
        "query": "index=_internal source=\"*splunkd.log\"\n| stats count by log_level",
        "queryParameters": {
          "earliest": "$global_time.earliest$",
          "latest": "$global_time.latest$"
        }
      },
      "name": "Search_1"
    },
    "ds_MmJUCreO": {
      "type": "ds.search",
      "options": {
        "query": "index=_internal source=\"*splunkd.log\" log_level=\"$log_level_tok$\"",
        "queryParameters": {
          "earliest": "$global_time.earliest$",
          "latest": "$global_time.latest$"
        }
      },
      "name": "Search_2"
    }
  },
  "defaults": {
    "dataSources": {
      "ds.search": {
        "options": {
          "queryParameters": {
            "latest": "$global_time.latest$",
            "earliest": "$global_time.earliest$"
          }
        }
      }
    }
  },
  "inputs": {
    "input_global_trp": {
      "type": "input.timerange",
      "options": {
        "token": "global_time",
        "defaultValue": "-24h@h,now"
      },
      "title": "Global Time Range"
    }
  },
  "layout": {
    "type": "absolute",
    "options": {
      "width": 1440,
      "height": 960,
      "display": "auto"
    },
    "structure": [
      {
        "item": "viz_dNS83Gj5",
        "type": "block",
        "position": { "x": 0, "y": 0, "w": 300, "h": 300 }
      },
      {
        "item": "viz_qGr86Sbm",
        "type": "block",
        "position": { "x": 300, "y": 0, "w": 1140, "h": 300 }
      }
    ],
    "globalInputs": [
      "input_global_trp"
    ]
  },
  "description": "",
  "title": "Test Input Placeholder"
}

Thanks, Tejas. --- If the above solution helps, an upvote is appreciated..!!
Hi @Cyner__, you have to enable receiving on Splunk Enterprise, then check the route from the Universal Forwarder on port 9997 to Splunk Enterprise (using telnet), then configure your outputs.conf (as described in the above link) on the Universal Forwarder. Ciao. Giuseppe
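A minimal outputs.conf sketch for the Universal Forwarder side; the hostname is a placeholder, substitute the address of your Splunk Enterprise server:

```
# $SPLUNK_HOME/etc/system/local/outputs.conf on the Universal Forwarder.
# "splunk-server.example.com" is a placeholder for your receiver's address.
[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = splunk-server.example.com:9997
```

Restart the forwarder after saving the file so the new output takes effect.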
Thanks for your answer. I'm not sure if this is what I want. Because the advanced hunting app requires an API call, with a limit on the number of calls, I start by making a call on DeviceProcessEvents. Then I'm not sure if I need to make another API call on DeviceRegistryEvents, since I'd like to join these two result sets.
Hi @anandhalagaras1, you should take the searches from the Workload view and adapt them to your requirements. Ciao. Giuseppe
I have the same question, which capabilities are needed for the "Add Data" button?
@gcusello We are using Splunk Cloud version 9.1.2308.203. Following your instructions, I navigated to Cloud Monitoring Console --> License Usage and found the following options in the Cloud Monitoring Console app:
- Entitlement
- Ingest
- Workload
- Storage Summary
- Searchable Storage (DDAS)
- Archive Storage (DDAA)
- Federated Search for Amazon S3
Our Cloud Monitoring Console app is version 3.25.0. Please let me know how to pull the top 20 or top 50 sources with the index and sourcetype information.
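One common approach, sketched below rather than taken from a CMC panel, is to query the license usage log directly. Note the assumption that your role can search the _internal index in Splunk Cloud; if not, you would need the equivalent CMC dashboards instead:

```
index=_internal source=*license_usage.log* type="Usage"
| stats sum(b) AS bytes BY s, st, idx
| eval GB = round(bytes / 1024 / 1024 / 1024, 3)
| sort - bytes
| head 20
| rename s AS source, st AS sourcetype, idx AS index
| fields source, sourcetype, index, GB
```

Change `head 20` to `head 50` for the top 50; `s`, `st`, `idx`, and `b` are the source, sourcetype, index, and bytes fields that license_usage.log records per Usage event.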
Hi rsreese, I know this post is already some years old, but maybe it can help someone in the future. McAfee ePO, now called Trellix Orchestrator, can only send data to TCP ports via SSL. So switch the input from [tcp://514] to [tcp-ssl:514]. Be sure to fulfill the configuration requirements for tcp-ssl inputs.
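A sketch of what the changed input could look like in inputs.conf; the sourcetype name and certificate details are placeholders to adapt to your environment:

```
# inputs.conf -- tcp-ssl input sketch. The sourcetype is hypothetical,
# and serverCert must point at your own server certificate.
[tcp-ssl:514]
sourcetype = mcafee:epo:syslog

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/server.pem
sslPassword = <your_certificate_password>
```

The [SSL] stanza is what satisfies the "configuration requirements for tcp-ssl inputs" mentioned above; without a valid certificate the port will not accept connections.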
I don't need this course. It will absolutely not help me with what I have to do, which is pretty advanced in terms of classic Splunk architecture.
Linecount is not a significant factor when comparing event formats.  Most significant are the timestamp format and location, and how fields are delimited (key=value, JSON, etc.).
Hi @gcusello, yes, I did all of that. What do you mean by client? Do you mean the server with the forwarder, or the Splunk Enterprise server? And when I try to telnet the Splunk server from the forwarder server (which I think is the client), the connection always times out. I saw that my Splunk server (my computer, I guess) doesn't have any inputs.conf in the directory C:\Program Files\Splunk\etc\system\local. What should I do? Best regards
Hello @Lidiane.Wiesner, I did some digging around and I've seen people suggesting to make sure java is running on a supported environment.  https://docs.appdynamics.com/appd/24.x/24.5/en/application-monitoring/install-app-server-agents/java-agent/java-supported-environments
Hi @Cyner__, port 9997 must be opened on the Splunk Enterprise server, not on the client; you can open the port in [Settings > Forwarding and Receiving > Receiving]. In fact, the telnet test must be done from the client, not from the Splunk server. Did you complete all the steps described in the document or in my previous answer? Ciao. Giuseppe
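Enabling receiving through the UI writes a stanza like the one below; as a sketch, the same file can be created by hand on the receiving instance (Windows path shown to match the earlier post):

```
# C:\Program Files\Splunk\etc\system\local\inputs.conf on the receiving
# Splunk Enterprise instance -- equivalent to enabling receiving on port
# 9997 via Settings > Forwarding and Receiving > Receiving.
[splunktcp://9997]
disabled = 0
```

Restart splunkd after creating the file so the listener opens, then retry the telnet test from the forwarder.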
Hi @anandhalagaras1, if you look in the Monitoring Console app [Settings > Monitoring Console > Indexing > License Usage > Historic License Usage] or in the License Consumption Report [Settings > Licensing > Usage Report > Previous 60 days > Split by ...], you can find the searches you need. Ciao. Giuseppe
I finished the setup several times with my org/key on the setup page, and I don't have the password.conf. Splunk is hosted on a server and I am doing the setup from my laptop; I don't know if that can be the reason why I didn't get the password.conf.