All Posts



I have multiple disks (C:, D: and E:) on a server and want to run the prediction for all of them in the same query. My current query for one disk:

index=main host="localhost" instance="C:" sourcetype="Perfmon:LogicalDisk" counter="% Free Space"
| timechart min(Value) as "Used Space"
| predict "Used Space" algorithm=LLP5 future_timespan=180

Could anyone help with a modified query?
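One possible approach (an untested sketch, reusing the field names from the query above and assuming the three instance names) is to drop the single-instance filter, split the timechart by instance, and pass each resulting series to predict, which accepts multiple fields:

```
index=main host="localhost" sourcetype="Perfmon:LogicalDisk" counter="% Free Space" instance IN ("C:", "D:", "E:")
| timechart span=1h min(Value) by instance
| predict "C:" "D:" "E:" algorithm=LLP5 future_timespan=180
```

The span and field names would need adjusting to match the actual instance values emitted by Perfmon.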
Hello, I'm working on a Splunk query to track REST calls in our logs. Specifically, I'm trying to use the transaction command to group related logs; each transaction should include exactly two messages: a RECEIVER log and a SENDER log. Here's my current query:

index=... ("SENDER[" OR ("RECEIVER[" AND "POST /my-end-point*"))
| rex "\[(?<id>\d+)\]"
| transaction id startswith="RECEIVER" endswith="SENDER" mvlist=message
| search eventcount > 1
| eval count=mvcount(message)
| eval request=mvindex(message, 0)
| eval response=mvindex(message, 1)
| table id, duration, count, request, response, _raw

The idea is to group the RECEIVER and SENDER logs using the transaction id that my logs create (e.g., RECEIVER[52] and SENDER[52]), and then split the first and second messages of the transaction into request and response fields for better visualisation. The transaction command seems to be grouping the logs correctly: I get the right number of transactions, and both receiver and sender logs are present in the _raw field. For a few cases it works fine and I get the proper request and response in two distinct fields, but for many transactions the response (second message) shows as NULL, even though eventcount is 2 and both messages are visible in _raw. The message field is present at both ends of the transaction, as I can see it in the _raw output. Can someone guide me on what is wrong with my query?
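As a hedged alternative (a sketch, not a diagnosis of the transaction issue; it reuses the id extraction and message field from the query above), stats avoids the mvindex ordering problem entirely by classifying each event before aggregating:

```
index=... ("SENDER[" OR ("RECEIVER[" AND "POST /my-end-point*"))
| rex "\[(?<id>\d+)\]"
| eval msgtype=if(match(_raw, "RECEIVER\["), "request", "response")
| stats count, range(_time) as duration,
        values(eval(if(msgtype=="request", message, null()))) as request,
        values(eval(if(msgtype=="response", message, null()))) as response
        by id
| where count > 1
| table id, duration, count, request, response
```

stats is also generally cheaper than transaction, since it does not need to buffer events to stitch them together.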
Hi @AJH2000 , do you have a stand-alone server or a distributed architecture? On a stand-alone server you should see all the indexes. If instead you have a distributed architecture and you are working on the Search Head, you don't see the indexes that exist only on the Indexers. The easiest approach is to add an empty index on the Search Head as well, just so this index appears in the dropdown lists. Ciao. Giuseppe
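As a sketch (index name and paths are hypothetical), the empty index on the Search Head could be defined with a minimal indexes.conf stanza, for example in an app's local directory:

```
# indexes.conf on the Search Head (hypothetical index name)
[my_index]
homePath   = $SPLUNK_DB/my_index/db
coldPath   = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
```

No data ever lands in this copy of the index; it only exists so the name shows up in role and dropdown menus on the Search Head.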
Hello, I am using the content pack correlation search "Entity Degraded", and all the NEAPs from the content pack are enabled (e.g., Episodes by Alert Group, Episodes by Alarm). I can see events coming into the correlation search, but episodes are not getting created. Are there any mandatory fields that need to be configured? As mentioned, I am only using the built-in correlation searches and NEAPs from the content pack. Thanks,
Hi @ws , the best approach is:
1. remove every input that sends logs to this index,
2. in the Cluster Manager, set the retention (frozenTimePeriodInSecs) of this index to zero and push the configuration to the Indexers,
3. after a few minutes, check that there isn't any log left in the index,
4. then remove the index from the Cluster Manager and push the configuration again.
Ciao. Giuseppe
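As an illustration of the retention step (index name is hypothetical; this is a sketch, not a tested config), the stanza pushed from the Cluster Manager might look like:

```
# indexes.conf in the Cluster Manager's pushed bundle (hypothetical index name)
[test_index]
# very short retention: buckets roll to frozen almost immediately and,
# with no coldToFrozenDir configured, frozen buckets are deleted
frozenTimePeriodInSecs = 1
```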
What exactly do you want to do? The command you provided will "empty" the index without touching its definition. Also, I haven't tried this in a cluster (I assume that's what you mean by 3 indexers and "a management node") but I'd expect the cluster to start fixups as soon as you do the operation on the first node unless you enable maintenance mode. Anyway, if you want to leave the index definition but only remove the indexed events, that's one of the possibilities. Another one is to set very short retention period and let Splunk roll the buckets normally. If you want to remove the index along with its definition, you have to remove it from indexes.conf on the CM, push the config bundle (this will trigger rolling restart of indexers) and then manually remove index directories from each indexer.
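A rough sketch of that last option, with a hypothetical index name and default paths (run as the splunk user):

```
# On the Cluster Manager: delete the [my_index] stanza from the pushed
# indexes.conf, then distribute the bundle (triggers a rolling restart):
/opt/splunk/bin/splunk apply cluster-bundle --answer-yes

# After the rolling restart completes, on EACH indexer,
# remove the leftover index directories:
rm -rf /opt/splunk/var/lib/splunk/my_index
```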
Upgrading Splunk Enterprise using rpm -Uvh <<splunk-installer>>.rpm on RHEL seems to have caused the "Network daemons not managed by the package system" finding to be flagged by Nessus (https://www.tenable.com/plugins/nessus/33851). I noticed that on some Splunk Enterprise instances, after the upgrade there are 2 tar.gz files created in /opt/splunk/opt/packages that cause the two processes below to be started by Splunk (pkg-run):

agentmanager-1.0.1+XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX.tar.gz
identity-0.0.1-xxxxxx.tar.gz

The 2 processes are started by the splunk user and re-spawn if killed with the kill command:

/opt/splunk/var/run/supervisor/pkg-run/pkg-agent-manager2203322202/agent-manager
/opt/splunk/var/run/supervisor/pkg-run/pkg-identity1066404666/identity

Why does upgrading Splunk Enterprise cause these 2 files to be created? Is this normal?
Hi, After setting up a test index and ingesting a test record, I'm now planning to remove the index from the distributed setup. Could anyone confirm the correct procedure for removing an index in a distributed environment with 3 indexers and a management node? On an all-in-one setup I normally run the following command:

/opt/splunk/bin/splunk clean eventdata -index index_name
@marycordova  Thank you for the valuable suggestion. The approach you've shared is indeed effective. However, in our current environment, implementing a user-based license model may not be feasible due to internal policy and stakeholder alignment constraints. We are exploring alternatives that align with our existing licensing agreements.
Hi @AJH2000  It sounds like your HEC connection is working as expected, and you have confirmed that the data is being ingested, so I think your HEC configuration is all good. You haven't mentioned your deployment architecture, but I suspect you are using a SH/SHC connecting to an indexer cluster. When you configured the index, did you also create the index on the SH/SHC? If you didn't, that would explain why the index is not visible in the Edit Role screen. Please make sure the index definition exists on the SH and then check again. Did this answer help you? If so, please consider: Adding karma to show it was useful; Marking it as the solution if it resolved your issue; Commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
Hi community, I'm running into a permissions/visibility issue (I'm not sure which) with an index created for receiving data via HTTP Event Collector (HEC) in Splunk Cloud.

Context:
- I have a custom index: rapid7
- Data is being successfully ingested via a Python script using the /services/collector/event endpoint
- The script defines index: rapid7 and sourcetype: rapid7:assets
- I can search the data using index=rapid7 and get results
- I can also confirm the sourcetype: index=rapid7 | stats count by sourcetype

Problem: I am trying to add rapid7 to my role's default search indexes, but when I go to Settings → Roles → admin → Edit → Indexes searched by default, the index rapid7 appears blank. I don't know if this is the whole problem.

What I've verified:
- The index exists and receives data
- The data is visible in Search & Reporting if I explicitly specify index=rapid7
- I am an admin user
- I confirmed the index is created (visible under Settings → Indexes)

My questions:
- What could cause an index to not appear in the "Indexes searched by default" list under role settings?
- Could this be related to the app context of the index (e.g., if created under http_event_collector)?
- Is there a way in Splunk Cloud to globally share an index created via HEC so it appears in role configuration menus?

I want to be able to search sourcetype="rapid7:assets" without explicitly specifying index=rapid7, by including it in my role's default search indexes. Any advice, experience or support links would be appreciated! Thanks!
@Nawab  Reconfiguring Splunk Enterprise Security is what I would advise you to do, however if the problem persists, open a support ticket. https://docs.splunk.com/Documentation/ES/8.0.40/Install/InstallSplunkESinSHC#Installing_Splunk_Enterprise_Security_in_a_search_head_cluster_environment 
What was the required size of the storage per day, in GB?
Hello, I have an air-gapped Splunk AppDynamics (25.1) HA on-premises instance deployed, fleet management service enabled, and smart agents installed on the VMs to manage the app server agents. I want to be able to download the agents directly from AppDynamics Downloads from the controller UI instead of downloading manually (i.e. Using AppDynamics Portal), but I don't know which URLs should be whitelisted on the firewall. Can anyone help me with this? Thanks, Osama
I tried it again using the same method you suggested: deployed and configured the app on the deployer and pushed the config bundle, but it's still the same.
Wait. As far as I remember (it's been some time since I did it last time) you don't manually copy anything. When you run the installer in deployer mode it takes care of preparing the shcluster bundle. That's why you run it exactly as described - upload the app to the deployer, run the installer on the deployer, apply shcluster-bundle. No manual copying stuff anywhere.
I followed these steps: installed ES on the deployer and configured it (Mission Control is not working on the deployer), then I copied ES to shcluster/apps and pushed the configuration. Now all the DA-ESS and SA apps are present in the apps directory of each SHC member, but when I click the ES app or the Mission Control app on a cluster member, it still says "continue to setup page". Not sure why.
@Nawab  Installing ES on a Search Head Cluster Deployer:
1. On the Splunk toolbar, select Apps > Manage Apps and click Install app from file
2. Click Choose File and select the Splunk Enterprise Security file
3. Click Upload to begin the installation
4. Click Continue to app setup page
5. Click Start Configuration Process, and wait for it to complete
6. Use the Deployer to deploy ES to the cluster members. From the Deployer run: /opt/splunk/bin/splunk apply shcluster-bundle
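For reference, the apply shcluster-bundle step typically needs a target member and credentials; a sketch of the full invocation (hostname, port, and credentials are placeholders) is:

```
/opt/splunk/bin/splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:changeme
```

The -target can be any one SHC member; the captain then replicates the bundle to the rest of the cluster.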
I followed everything exactly as described in the docs.
@Nawab  -  Please make sure that you followed all pre-requisites for SHC and ES on SHC.  https://docs.splunk.com/Documentation/ES/8.0.2/Install/InstallSplunkESinSHC