All Posts

Assuming instance contains the disks you want to predict, you could try something like this:
index=main host="localhost" instance IN ("C:", "D:", "E:") sourcetype="Perfmon:LogicalDisk" counter="% Free Space"
| eval instance=substr(instance,1,1)
| timechart min(Value) by instance
| appendpipe [| fields _time C | where isnotnull(C) | predict C algorithm=LLP5 future_timespan=180]
| appendpipe [| fields _time D | where isnotnull(D) | predict D algorithm=LLP5 future_timespan=180]
| appendpipe [| fields _time E | where isnotnull(E) | predict E algorithm=LLP5 future_timespan=180]
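If each drive ends up as its own timechart column, predict can also take several fields in one call, which avoids the chained appendpipe subsearches. A minimal sketch under the same assumptions (columns named C, D and E after the split by instance):

index=main host="localhost" instance IN ("C:", "D:", "E:") sourcetype="Perfmon:LogicalDisk" counter="% Free Space"
| eval instance=substr(instance,1,1)
| timechart min(Value) by instance
| predict C D E algorithm=LLP5 future_timespan=180

The appendpipe variant is still handy when some drives have gaps in their data, because each predict then only sees the rows where its own series is populated.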
It's hard to say what's "wrong" without knowing your data, but while transaction can sometimes be useful (in some strange use cases), it's often easier and faster to simply use stats, mostly because transaction has loads of limitations that stats doesn't have. A quick glance at your search suggests that for some reason the message field is not extracted properly from your events, so you're not getting two separate values in your multivalue message output field. As I said, I'd go with
index=... ("SENDER[" OR ("RECEIVER[" AND "POST /my-end-point*"))
| rex "\[(?<id>\d+)\]"
| eval request=if(searchmatch("\"RECEIVER[\" \"POST /my-end-point\""), message, null())
| eval response=if(searchmatch("\"SENDER[\""), message, null())
| stats range(_time) as duration, count, values(request) as request, values(response) as response, values(_raw) as _raw by id
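If message isn't coming through as a field at all (the events in this thread look like JSON), spath can pull it out explicitly before the classification step. A minimal sketch, assuming the raw events are valid JSON like the samples posted later in the thread:

index=... ("SENDER[" OR ("RECEIVER[" AND "POST /my-end-point*"))
| spath input=_raw path=message output=message
| rex field=message "\[(?<id>\d+)\]"
| eval request=if(searchmatch("\"RECEIVER[\""), message, null())
| eval response=if(searchmatch("\"SENDER[\""), message, null())
| stats range(_time) as duration, values(request) as request, values(response) as response by id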
transaction can silently ignore data, depending on data volume and the time between start and end, and you will not get any indication that data has been discarded. It's far better to use stats to group by an id - which you appear to have. At the simplest level you can replace transaction with stats like this:
index=... ("SENDER[" OR ("RECEIVER[" AND "POST /my-end-point*"))
| rex "\[(?<id>\d+)\]"
| stats list(_raw) as _raw list(message) as message min(_time) as start_time max(_time) as end_time by id
| eval duration=end_time - start_time, eventcount=mvcount(_raw)
| eval request=mvindex(message, 0)
| eval response=mvindex(message, 1)
| table id, duration, eventcount, request, response, _raw
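One thing to watch: list() keeps the order in which events reach the stats command, and for a normal search that is usually newest first, so mvindex(message, 0) can end up being the SENDER rather than the RECEIVER. A small sketch of one way to pin the order down, sorting oldest-first before the stats (the 0 in sort removes the default row limit):

index=... ("SENDER[" OR ("RECEIVER[" AND "POST /my-end-point*"))
| rex "\[(?<id>\d+)\]"
| sort 0 _time
| stats list(message) as message min(_time) as start_time max(_time) as end_time by id
| eval duration=end_time - start_time, request=mvindex(message, 0), response=mvindex(message, 1)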
Ok, a word of advice - it's usually better to specify indexes explicitly than to rely on the ones searched by default. Especially with an admin role! It spares your environment unnecessary load from searches in which you haven't specified the indexes, and it saves you a lot of debugging when you have different roles with different default indexes and people report mismatches in search behaviour. You have been warned. One additional hint - it's far better to do a quick check with
| tstats count where index=rapid7 by sourcetype
than
index=rapid7 | stats count by sourcetype
The first one only checks the summarized indexed fields while yours needs to plow through all events in the index. And there is something that doesn't add up. On Splunk Cloud you cannot have the admin user role. You can only have sc_admin (which is a limited admin role). So if you're trying to edit the admin role you shouldn't be able to do so.
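If you want to see exactly which indexes a role searches by default, the roles REST endpoint exposes that directly. A quick sketch (assuming your role is allowed to read the authorization endpoints; swap in whichever role you are checking):

| rest /services/authorization/roles splunk_server=local
| search title=sc_admin
| table title srchIndexesDefault srchIndexesAllowed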
Not sure if that's what you expect - let me know if you need something else. Here are two raw events that my query matched together, but response is not being displayed (while present in the output _raw):
{"severity":"INFO","logger":"com.PayloadLogger","thread":"40362833","message":"RECEIVER[20084732]: POST /my-end-point Headers: {sedatimeout=[60000], x-forwarded-port=[443], jmsexpiration=[0], host=[hostname], content-type=[application/json], Content-Length=[1461], sending.interface=[ANY], Accept=[application/json], cookie=[....], x-forwarded-proto=[https]} {{\"content\":"Any content here"}}","properties":{"environment":"any","transactionOriginator":"any","customerId":"any","correlationId":"any","configurationId":"any"}}
{"severity":"INFO","logger":"com.PayloadLogger","thread":"40362833","message":"SENDER[20084732]: Status: {200} Headers: {Date=[Mon, 05 May 2025 07:27:18 GMT], Content-Type=[application/json]} {{\"generalProcessingStatus\":\"OK\",\"content\":[]}}","properties":{"environment":"any","transactionOriginator":"any","customerId":"any","correlationId":"any","configurationId":"any}}
I've been trying to use stats as well but have had more trouble than with transaction, which works pretty well (despite this missing response field). Can't say I'm a Splunk expert.
Firstly, let me start by stating the obvious - vulnerability scanners are notorious for being overly trigger-happy with their findings. It takes an experienced person to filter their output down to the findings that actually matter. Having said that - those processes are spawned by the splunkd process (not directly - via the compsup daemon). So that finding is at least questionable, if not simply a false positive.
Please provide some sample data (anonymised) which demonstrates your issue. Having said that, you could try using stats to gather your events by id, as this can be more deterministic than transaction.
I have multiple disks like C, D & E on a server and want to do the prediction for multiple disks in the same query.
index=main host="localhost" instance="C:" sourcetype="Perfmon:LogicalDisk" counter="% Free Space"
| timechart min(Value) as "Used Space"
| predict "Used Space" algorithm=LLP5 future_timespan=180
Could anyone help with a modified query?
Hello, I'm working on a Splunk query to track REST calls in our logs. Specifically, I'm trying to use the transaction command to group related logs - each transaction should include exactly two messages: a RECEIVER log and a SENDER log. Here's my current query:
index=... ("SENDER[" OR ("RECEIVER[" AND "POST /my-end-point*"))
| rex "\[(?<id>\d+)\]"
| transaction id startswith="RECEIVER" endswith="SENDER" mvlist=message
| search eventcount > 1
| eval count=mvcount(message)
| eval request=mvindex(message, 0)
| eval response=mvindex(message, 1)
| table id, duration, count, request, response, _raw
The idea is to group RECEIVER and SENDER logs using the transaction id that my logs create (e.g., RECEIVER[52] and SENDER[52]), and then split the first and second messages of the transaction into request and response for better visualisation. The transaction command seems to be grouping the logs correctly: I get the right number of transactions, and both receiver and sender logs are present in the _raw field. For a few cases it works fine and I get the expected request and response in two distinct fields, but for many transactions the response (second message) shows as NULL, even though eventcount is 2 and both messages are visible in _raw. The message field is present in both events of the transaction, as I can see it in the _raw output. Can someone guide me on what is wrong with my query?
Hi @AJH2000 , do you have a stand-alone server or a distributed architecture? If it's a stand-alone server, you should see all the indexes. If instead you have a distributed architecture and you are working on the Search Head, you don't see all the indexes that exist on the Indexers. The easiest approach is to add an empty index on the Search Head as well, only to make this index appear in the dropdown lists. Ciao. Giuseppe
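A minimal sketch of what such an empty index definition could look like in indexes.conf on the Search Head (the stanza name rapid7 matches the index discussed in this thread; the paths are just the usual defaults):

[rapid7]
homePath   = $SPLUNK_DB/rapid7/db
coldPath   = $SPLUNK_DB/rapid7/colddb
thawedPath = $SPLUNK_DB/rapid7/thaweddb

On Splunk Cloud you typically can't edit indexes.conf directly; there the index is created from Settings → Indexes instead (as already done in this thread), so that the definition exists on the search head where the role editor runs.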
Hello, I am using the content pack correlation search - entity degraded - and all the NEAPs which are in the content pack are enabled, like episodes by alertgroup and episodes by alarm. I am seeing events coming into the correlation search, but episodes are not getting created. Are there any mandatory fields that need to be configured? As mentioned, I am using the built-in correlation searches and NEAPs from the content pack. Thanks,
Hi @ws , the best approach is:
- remove every input that sends logs to this index,
- on the Cluster Manager, set the retention (frozenTimePeriodInSecs) of this index to zero and push the configuration to the Indexers,
- after some minutes, check that there are no longer any logs in the index,
- then remove the index from the Cluster Manager and push the configuration again.
Ciao. Giuseppe
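A sketch of the override that the retention step describes, pushed from the Cluster Manager (the stanza name is just a placeholder for your test index; a value of 1 second effectively means "expire everything"):

[test_index]
frozenTimePeriodInSecs = 1

Once the buckets have rolled and the index is empty, the stanza can be removed entirely and the bundle pushed again.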
What exactly do you want to do? The command you provided will "empty" the index without touching its definition. Also, I haven't tried this in a cluster (I assume that's what you mean by 3 indexers and "a management node"), but I'd expect the cluster to start fixups as soon as you do the operation on the first node unless you enable maintenance mode. Anyway, if you want to keep the index definition but remove the indexed events, that's one of the possibilities. Another is to set a very short retention period and let Splunk roll the buckets normally. If you want to remove the index along with its definition, you have to remove it from indexes.conf on the CM, push the config bundle (this will trigger a rolling restart of the indexers) and then manually remove the index directories from each indexer.
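If you go the retention route, a quick way to confirm that the index has actually emptied before touching its definition is dbinspect, which reads bucket metadata rather than the events themselves. A small sketch (the index name is a placeholder):

| dbinspect index=index_name
| stats count as buckets, sum(eventCount) as events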
Upgrading Splunk Enterprise using rpm -Uvh <<splunk-installer>>.rpm on RHEL seems to have caused "Network daemons not managed by the package system" to be flagged by Nessus (https://www.tenable.com/plugins/nessus/33851). I notice that for some Splunk Enterprise instances, after the upgrade there are 2 tar.gz files created in /opt/splunk/opt/packages that cause the 2 processes below to be started by Splunk (pkg-run):
agentmanager-1.0.1+XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX.tar.gz
identity-0.0.1-xxxxxx.tar.gz
The 2 processes are started by the splunk user and they re-spawn if killed with the kill command:
/opt/splunk/var/run/supervisor/pkg-run/pkg-agent-manager2203322202/agent-manager
/opt/splunk/var/run/supervisor/pkg-run/pkg-identity1066404666/identity
Why does upgrading Splunk Enterprise create these 2 files, and is this normal?
Hi, After setting up a test index and ingesting a test record, I'm now planning to remove the index from the distributed setup. Could anyone confirm the correct procedure for removing an index in a distributed environment with 3 indexers and a management node? I normally run the following command on an all-in-one setup:
/opt/splunk/bin/splunk clean eventdata -index index_name
@marycordova  Thank you for the valuable suggestion. The approach you've shared is indeed effective. However, in our current environment, implementing a user-based license model may not be feasible due to internal policy and stakeholder alignment constraints. We are exploring alternatives that align with our existing licensing agreements.
Hi @AJH2000  It sounds like your HEC connection is working as expected, and you have confirmed that the data is being ingested, so I think your HEC configuration is all good. You haven't mentioned your deployment architecture, however I suspect you are using a SH/SHC connecting to an indexer cluster. When you configured the index, did you also create the index on the SH/SHC? If you didn't, that would explain why the index is not visible in the Edit Role screen. Please make sure the index definition exists on the SH and then check again. Did this answer help you? If so, please consider: adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
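One quick way to confirm whether the index definition exists on the search head you are logged into is the data/indexes REST endpoint, restricted to the local instance. A minimal sketch (the index name is taken from this thread):

| rest /services/data/indexes splunk_server=local
| search title=rapid7
| table title disabled

If that returns no rows when run on the SH, the definition only exists on the indexer tier, which would match the behaviour seen in the Edit Role screen.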
Hi community, I'm running into a permissions/visibility issue (I don't know exactly which) with an index created for receiving data via HTTP Event Collector (HEC) in Splunk Cloud.
Context:
- I have a custom index: rapid7
- Data is being successfully ingested via a Python script using the /services/collector/event endpoint
- The script defines index: rapid7 and sourcetype: rapid7:assets
- I can search the data using index=rapid7 and get results. I can also confirm the sourcetype with: index=rapid7 | stats count by sourcetype
Problem:
I am trying to add rapid7 to my role's default search indexes, but when I go to Settings → Roles → admin → Edit → Indexes searched by default, the index rapid7 appears blank. I don't know if this is the whole problem.
What I've verified:
- The index exists and receives data
- The data is visible in Search & Reporting if I explicitly specify index=rapid7
- I am an admin user
- I confirmed the index is created (visible under Settings → Indexes)
My questions:
- What could cause an index to not appear in the "Indexes searched by default" list under role settings?
- Could this be related to the app context of the index (e.g., if created under http_event_collector)?
- Is there a way in Splunk Cloud to globally share an index created via HEC so it appears in role configuration menus?
I want to be able to search sourcetype="rapid7:assets" without explicitly specifying index=rapid7, by including it in my role's default search indexes. Any advice, experience or support links would be appreciated! Thanks!
@Nawab  Reconfiguring Splunk Enterprise Security is what I would advise you to do; however, if the problem persists, open a support ticket. https://docs.splunk.com/Documentation/ES/8.0.40/Install/InstallSplunkESinSHC#Installing_Splunk_Enterprise_Security_in_a_search_head_cluster_environment
What was the required size of the storage per day in GB?
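If the question is about measuring actual daily ingest, the license usage log gives a per-day figure that converts easily to GB. A minimal sketch (assumes access to the _internal index on the license manager; 1 GB = 1024^3 bytes here):

index=_internal source=*license_usage.log type=Usage
| timechart span=1d sum(b) as bytes
| eval GB=round(bytes/1024/1024/1024, 2)
| fields _time GB

Note that license usage measures raw ingested volume per day; on-disk storage per day is related but not identical, since indexed data is compressed and index files are added on top.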