All Posts

Hi @dzhangw7, why are you using a subsearch? You can put all the conditions in the main search:

index=my_index "Check something" (("Extracted entities" AND "'date': None") OR extracted_entities.date=null)
| timechart count by classification

optionally adding a condition on identity_id:

index=my_index "Check something" (("Extracted entities" AND "'date': None") OR extracted_entities.date=null) identity_id=*
| timechart count by classification

Ciao. Giuseppe
Hi @SN1, in addition to the perfect answer from @kiran_panchavat, you could install the Splunk_TA_nix add-on (https://splunkbase.splunk.com/app/833) and extract additional information from the Linux system you're using. Ciao. Giuseppe
Yes, that was my suspicion. Your general idea seems OK (provided that your transform definition contains separate lines which just got squished into one on copy-paste). Additional question - aren't you by any chance using indexed extractions? If you are, data is sent as parsed and is not processed by transforms further down the pipeline.
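For reference, indexed extractions are enabled per sourcetype in props.conf on the forwarder; a minimal sketch of what to look for (the sourcetype name here is hypothetical):

# props.conf on the UF (hypothetical sourcetype)
[my_json_sourcetype]
INDEXED_EXTRACTIONS = json

If this is set, the UF sends fully parsed events, so index-time TRANSFORMS on the indexer/HF will not run against them.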
@cherrypick If this is a one-time ingestion of the missing data, the simplest method is to use the Splunk Web UI (Settings > Add Data > Upload) to upload the JSON file directly into your index_name. In the "Input Settings" step, set the Index field to index_name (your existing index). Review the configuration, then click Submit to ingest the file into index_name.
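If you have CLI access to an indexer or heavy forwarder, a one-shot upload does the same job; a minimal sketch, with a hypothetical file path and the built-in _json sourcetype assumed to fit your data:

$SPLUNK_HOME/bin/splunk add oneshot /tmp/missing_data.json -index index_name -sourcetype _json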
@SN1 Splunk indexers store data on disk in indexes, and the "total memory allocated" could refer to the total disk space available on the partition where Splunk stores its data (typically under $SPLUNK_HOME/var/lib/splunk). The "memory it is using" would then be the disk space consumed by the indexes, and the "remaining disk space left" would be the free space on that partition.

For disk space information (which seems to be what you're actually asking about):

| rest /services/server/status/partitions-space splunk_server=*
| eval totalGB = round(capacity/1024/1024, 2)
| eval freeGB = round(free/1024/1024, 2)
| eval usedGB = round((capacity - free)/1024/1024, 2)
| table splunk_server, totalGB, usedGB, freeGB

To get the total memory allocated on an indexer and its current usage (which is different from disk space), use the following for memory information:

| rest /services/server/status/resource-usage/hostwide splunk_server=*

This will show you key metrics including:
- Total physical memory on the system
- Memory currently in use
- Available memory

If you're specifically interested in Splunk's own memory usage or in specific index volume usage, see the sketches below.

Note that memory usage and disk space are different resources. Memory refers to RAM available for processing, while disk space refers to storage capacity for data. Your question mentions memory but ends with disk space, so I've provided commands for both.
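For those last two, hedged sketches (the _introspection index must be enabled, and the field names follow the documented introspection/REST schemas - verify them on your version):

For Splunk's own memory usage:

index=_introspection sourcetype=splunk_resource_usage component=PerProcess host=your_indexer
| stats avg(data.mem_used) as avg_mem_used_MB by data.process

For specific index volume usage:

| rest /services/data/indexes splunk_server=*
| table splunk_server, title, currentDBSizeMB, maxTotalDataSizeMB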
Yes, you're right, and I described why I can't in my other answer about it.
I described why I cannot access the inputs file on the UF: it's because we do not have permission to access the host.
The reason I’m not using inputs.conf with a blacklist is that the hosts sending these logs are managed by another company. They control the Universal Forwarders (UF) and their input configurations, meaning we don’t have access to modify them. However, we still need to mask and drop these logs at our end.
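Since the data still passes through your own indexers (or a heavy forwarder), index-time props/transforms there can mask or drop events before they are written; a minimal sketch, with a hypothetical sourcetype and regexes:

# props.conf on the indexer / heavy forwarder
[vendor:sourcetype]
TRANSFORMS-clean = drop_noise, mask_secrets

# transforms.conf
[drop_noise]
# route unwanted events to the null queue (discard)
REGEX = pattern_of_events_to_drop
DEST_KEY = queue
FORMAT = nullQueue

[mask_secrets]
# rewrite _raw to mask the captured value
REGEX = (password=)\S+
DEST_KEY = _raw
FORMAT = $1xxxxxx

Note this only works if the events arrive unparsed (plain UF traffic without INDEXED_EXTRACTIONS).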
I want to get the total memory allocated on one indexer and how much memory it is using, so that I can work out the remaining disk space left.
@Rastegui To pinpoint the user or process stopping the Splunk UF, you need to look beyond Splunk's internal logs and Windows System Events alone.

Enable and monitor Windows Security Event Logs

Required log source: Windows Security Event Log (WinEventLog:Security). The Security Event Log can capture events related to service control actions if auditing is enabled. Specifically, Event ID 4656 (with proper auditing) or Event ID 4670 (permissions changes) might indicate when a user or process interacts with the SplunkForwarder service. Ensure the Splunk UF is configured to forward Windows Security Event Logs.

Useful Windows Security Event Log codes to monitor for identifying the user or process responsible for stopping the Splunk UF agent:
- Event ID 4688: Logs the creation of a new process. This can help identify the process responsible for stopping the Splunk UF agent.
- Event ID 4648: Logs the use of explicit credentials. This can help identify the user who performed the action.
- Event ID 4624: Logs successful account logons. This can help track user activity.
- Event ID 4625: Logs failed account logons. This can indicate unauthorized attempts to access the system.
- Event ID 1102: Logs audit log clearance. This can indicate an attempt to cover tracks.

By monitoring these event codes, you should be able to get a clearer picture of the user or process responsible for stopping the Splunk UF agent; a correlation search is sketched below. Please check https://www.ultimatewindowssecurity.com/securitylog/encyclopedia/
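As a starting point, a hedged SPL sketch that surfaces process creations likely to have stopped the service (the index, sourcetype, and extracted field names depend on your Windows TA setup and are assumptions here):

index=wineventlog sourcetype="WinEventLog:Security" EventCode=4688 host="DC*"
| search New_Process_Name IN ("*\\sc.exe", "*\\net.exe", "*\\net1.exe", "*\\taskkill.exe", "*\\powershell.exe")
| table _time, host, Account_Name, New_Process_Name, Process_Command_Line

Cross-reference the timestamps against the EventCode 7036 stop events to find the responsible process and account.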
@Rastegui The _internal index logs Splunk's own operational data, including shutdown events. However, it typically only records that the service stopped (e.g., "Splunkd daemon is shutting down") and not who or what triggered it. This is because the Splunk UF doesn't natively track external triggers in detail; it's a lightweight agent focused on forwarding data, not auditing administrative actions.

Event ID 7036 in the Windows System Event Log indicates that a service (like SplunkForwarder) changed state (e.g., stopped), but it doesn't consistently log the user or process responsible for stopping it. This event is generated by the Service Control Manager (SCM) and lacks the context of the initiating action unless additional auditing is enabled.

- Event ID 4688: This event logs the creation of a new process, which can help identify the process responsible for stopping the Splunk UF agent.
- Event ID 4648: This event logs the use of explicit credentials, which can help identify the user who performed the action.
- Audit logs: Enable auditing on the server to capture detailed information about user actions and process executions. This can provide more visibility into who performed the action; see the sketch below.
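To switch on the relevant auditing, something along these lines should work from an elevated prompt (the subcategory name is the standard Windows one; the registry value is the documented switch for including command lines in 4688 events):

auditpol /set /subcategory:"Process Creation" /success:enable
REM include the full command line in 4688 events
reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\Audit" /v ProcessCreationIncludeCmdLine_Enabled /t REG_DWORD /d 1 /f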
Can someone help create an equivalent query to the following, without using a subsearch? There are probably too many results and the query does not complete.

index=my_index
  [search index=my_index ("Extracted entities" AND "'date': None") OR extracted_entities.date=null
  | stats count by entity_id
  | fields entity_id
  | format]
"Check something"
| timechart count by classification

Basically I want to extract the list of entity_ids from the subsearch (events where dates are null) and then use those IDs to correlate in a second search, "Check something", which has a field "classification". Then I want to do a timechart on the result to see a line graph of events where a date was missing from an event, broken down by classification.
I am trying to identify the user or process responsible for stopping the Splunk UF agent. What log source do I need to be able to see this? I have unsuccessfully tried:

- Searching the _internal index - you can only see the service going down:
index=_internal sourcetype=splunkd host="DC*" component=Shutdown*
- Monitoring the Windows System Event Log for the forwarder shutdown event (EventCode 7036) - no visibility on who performed the action.

Looking for ideas on how this can be achieved from Splunk.
This seems really close to working. It works for the dataset that I provided but isn't working for my actual dataset; I haven't figured out why just yet. My actual dataset is MUCH larger and more convoluted. As PickleRick pointed out, this is awful data!
Hi, I have a Python modular input that populates an index (index_name). This ran into some gateway error issues, causing some data to be missing from the index. Is it possible to ingest a JSON file containing the missing data directly into the index (index_name)?

Thanks,
1. I suppose the easiest solution would be to just blacklist the directory within a specific inputs.conf stanza, as others already pointed out (see the sketch below).
2. Do your events come from monitor inputs on this HF or are they forwarded from other hosts? From HFs or UFs?
3. Ingest actions?
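For point 1, a minimal inputs.conf sketch (the monitor path and blacklist regex are hypothetical):

# inputs.conf
[monitor:///var/log/myapp]
index = my_index
# blacklist is a regex matched against the full file path
blacklist = /var/log/myapp/noisy_subdir/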
1. That's awful data. Either make your data normalized (causing a bunch of problems) or make it redundant (causing other problems) - here you have both approaches mixed.
2. Don't put data into the main index.
3. You can either use coalesce or foreach. A coalesce example (a foreach sketch follows below):

index=main
| spath
| search data.fruit.common.type IN ("apple","pear")
| eval color=coalesce('data.pear.color','data.apple.color')

EDIT: Fixed field references in coalesce() - without single quotes Splunk would interpret it as concatenating the fields data, pear/apple and color.
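And a hedged foreach sketch that avoids hard-coding the fruit names (assuming each fruit-specific object carries a color field, following the sample events' data.fruit.<type>.color layout):

index=main
| spath
| foreach data.fruit.*.color [ eval color=coalesce(color, '<<FIELD>>') ]

<<FIELD>> expands to each matched field name, so color picks up whichever fruit-specific color field is present in the event.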
Hi @tchamp

How about something like this?

| spath "data.fruit.common.type" output=fruitType
| eval colorPath="data.fruit." . fruitType . ".color"
| eval fruitColor=json_extract(_raw, colorPath)

This is the full SPL for a runnable example of how this might work (based on a sample data generator):

| makeresults
| eval json="[{\"data\":{\"fruit\":{\"common\":{\"type\":\"apple\",\"foo\":\"bar1\"},\"apple\":{\"color\":\"red\",\"size\":\"medium\",\"smell\":\"sweet\"}}}},{\"data\":{\"fruit\":{\"common\":{\"type\":\"pear\",\"foo\":\"bar2\"},\"pear\":{\"color\":\"green\",\"size\":\"medium\",\"taste\":\"sweet\"}}}}]"
| eval events=json_array_to_mv(json)
| mvexpand events
| eval _raw=events
| fields _raw
| spath "data.fruit.common.type" output=fruitType
| eval colorPath="data.fruit." . fruitType . ".color"
| eval fruitColor=json_extract(_raw, colorPath)

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.

Regards
Will
I am trying to figure out the best way to perform this search. I have some JSON log events where the event data is slightly different based on the type of fruit (this is just an example). I have two searches that return each thing that I want. I'm not sure if it is best to try to combine the two searches or if there is a better way altogether.

Here is an example of my event data:

Event Type 1:
{ "data": { "fruit": { "common": { "type": "apple", "foo": "bar1" }, "apple": { "color": "red", "size": "medium", "smell": "sweet" } } } }

Event Type 2:
{ "data": { "fruit": { "common": { "type": "pear", "foo": "bar2" }, "pear": { "color": "green", "size": "medium", "taste": "sweet" } } } }

I want to extract all of the "color" values from all of the log/JSON messages. I have two separate queries that extract each one, but I want them in a single table. Here are my current queries/searches:

index=main
| spath "data.pear.color"
| search "data.pear.color"=*
| eval fruitColor='data.pear.color'
| table _time, fruitColor

index=main
| spath "data.apple.color"
| search "data.apple.color"=*
| eval fruitColor='data.apple.color'
| table _time, fruitColor

I know that there must be a way to do something with the 'type' field to do what I want but can't seem to figure it out. Any suggestion is appreciated.
I decided to use 2 tokens instead of 3. But how do I use token2 (from the users dropdown) only if it was chosen?

index=sysmon_wec AND (EventCode=22 OR event_id=22) AND process_name="$procname$"
| makemv tokenizer="([^\r\n]+)(\r\n)?" User
| mvexpand User
| where NOT (User="SYSTEM" OR User="NT AUTHORITY\SYSTEM" OR User="NT AUTHORITY\NETWORK SERVICE" OR User="NT AUTHORITY\LOCAL SERVICE")
| head 100
| table process_name, User, ComputerName, QueryName, QueryResults

But I would like to add something like this in Splunk's search language (pseudocode):

| if isnotnull(User) then User="$user$"
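A common pattern is to give the dropdown an "Any" choice whose value is *, set it as the default, and then filter unconditionally; a hedged Simple XML sketch (input and field names are assumptions):

<input type="dropdown" token="user">
  <label>User</label>
  <choice value="*">Any</choice>
  <default>*</default>
  <fieldForLabel>User</fieldForLabel>
  <fieldForValue>User</fieldForValue>
  <search><query>index=sysmon_wec | stats count by User</query></search>
</input>

Then the search can always include the filter:

| search User="$user$"

When nothing specific is chosen, $user$ expands to * and matches every user.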