All Posts
Just wanted to add an xpath command solution that also works, simply as a future reference for users, alongside the spath command solution.

| makeresults
| eval _raw="<?xml version=\"1.0\" encoding=\"utf-8\"?> <soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\"> <soapenv:Body> <ns3:LogResponse xmlns:ns2=\"http://randomurl.com/sample1\" xmlns:ns3=\"http://randomurl.com/sample2\"> <LogResponse > <ResponseCode>OK</ResponseCode> <State>Simple</State> <Transactions> <TransactionName>CHANGED</TransactionName> </Transactions> <Transactions> <TransactionData>CHANGE_SIMPLE</TransactionData> </Transactions> <ServerTime>1649691711637</ServerTime> <SimpleResponseCode>OK</SimpleResponseCode> <nResponseCode> <nResponseCode>OK</nResponseCode> </nResponseCode> <USELESS>VALUES</USELESS> <MORE_USELESS>false</MORE_USELESS> </LogResponse> </ns3:LogResponse> </soapenv:Body> </soapenv:Envelope>"
| eval xml=replace(_raw, "^<\?xml.+\?>[\r\n]*", "") ``` xpath does not like the ?xml version/encoding text declaration, so remove it ```
| xpath field=xml outfield=ResponseCode "//*[local-name()='ResponseCode']" ``` use *[local-name()='<value>'] to ignore namespace declarations, i.e. xmlns='something' ```
| xpath field=xml outfield=SimpleResponseCode "//*[local-name()='SimpleResponseCode']"
| xpath field=xml outfield=nResponseCode "//*[local-name()='nResponseCode']/nResponseCode"
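For comparison, a rough spath-based sketch against the same sample event; the full path, including the namespace prefixes, is my assumption from the element nesting above and may need adjusting:

| spath input=_raw output=ResponseCode path=soapenv:Envelope.soapenv:Body.ns3:LogResponse.LogResponse.ResponseCode
| spath input=_raw output=SimpleResponseCode path=soapenv:Envelope.soapenv:Body.ns3:LogResponse.LogResponse.SimpleResponseCode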
Could you explain how the solution works when there is no internet in the factory data room for 14 hours? How will logging and monitoring work, and where can the logs be stored while there is no connection? @gcusello
Hi @LinkLoop,

You can verify Splunk is connected to outputs with the list forward-server command:

& "C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" list forward-server -auth admin:password

Active forwards:
    splunk.example.com:9997 (ssl)
Configured but inactive forwards:
    None

The command requires authentication, so you'll need to know the local Splunk admin username and password defined at install time. If the local management port is disabled, the command will not be available.

You can otherwise search local logs for forwarding activity:

Select-String -Path "C:\Program Files\SplunkUniversalForwarder\var\log\splunk\splunkd.log" -Pattern "Connected to idx="
Select-String -Path "C:\Program Files\SplunkUniversalForwarder\var\log\splunk\metrics.log" -Pattern "group=per_index_thruput, series=`"_internal`""
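If you also have search access to the receiving indexer, you can check from that side as well; a hedged sketch, assuming the standard tcpin_connections metrics on the receiver (field names may vary slightly by version):

index=_internal source=*metrics.log* group=tcpin_connections
| stats latest(_time) as last_seen by hostname, sourceIp, fwdType

Each connected forwarder should appear with a recent last_seen value.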
Hi @MichaelM1,

MaxUserPort adjusts limits on ephemeral ports. From the perspective of the intermediate forwarder, this would be the maximum port number allocated for an outbound connection to a downstream receiver. The intermediate forwarder would only listen on one port, or however many input ports you have defined. TcpTimedWaitDelay adjusts the amount of time a closed socket is held before it can be reused by another winsock client/server.

As a quick test, I installed Splunk Universal Forwarder 9.4.0 for Windows on a clean install of Windows Server 2019 Datacenter Edition named win2019 with the following settings:

# %SPLUNK_HOME%\etc\system\local\inputs.conf
[splunktcp://9997]
disabled = 0

# %SPLUNK_HOME%\etc\system\local\outputs.conf
[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = splunk:9997

[tcpout-server://splunk:9997]

where splunk is a downstream receiver.

To simulate 1000+ connections, I installed Splunk Universal Forwarder 9.4.0 for Linux on a separate system with the following settings:

# $SPLUNK_HOME/etc/system/local/limits.conf
[thruput]
maxKBps = 0

# $SPLUNK_HOME/etc/system/local/outputs.conf
[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = win2019:9997

[tcpout-server://win2019:9997]

# $SPLUNK_HOME/etc/system/local/server.conf
# additional default settings not shown
[general]
parallelIngestionPipelines = 2000

[queue]
maxSize = 1KB

[queue=AQ]
maxSize = 1KB

[queue=WEVT]
maxSize = 1KB

[queue=aggQueue]
maxSize = 1KB

[queue=fschangemanager_queue]
maxSize = 1KB

[queue=parsingQueue]
maxSize = 1KB

[queue=remoteOutputQueue]
maxSize = 1KB

[queue=rfsQueue]
maxSize = 1KB

[queue=vixQueue]
maxSize = 1KB

parallelIngestionPipelines = 2000 creates 2000 connections to win2019:9997. (Don't do this in real life. It's a Splunk instance using 2000x the resources of a typical instance. You'll consume memory very quickly as stack space is allocated for new threads.)

So far, I have no issues creating 2000 connections. Do you have a firewall or transparent proxy between forwarders and your intermediate forwarder? If yes, does the device limit the number of inbound connections per destination ip:port:protocol tuple?
Hi, I want to ignore the below line inside the Splunk alert payload if an email address is not provided by the user.

"action.email.to": {email},

What is the best way to do this? An if/else statement inside the payload throws a syntax error.
Hi @antoniolamonica,

Data model root search datasets start with a base search. Endpoint.Processes is:

(`cim_Endpoint_indexes`) tag=process tag=report
| eval process_integrity_level=lower(process_integrity_level)

This search is expanded by Splunk to include the contents of the cim_Endpoint_indexes macro and all event types that match tag=process and tag=report. To compare like-for-like searches, start with the same base search:

(`cim_Endpoint_indexes`) tag=process tag=report earliest=-4h@h latest=-2h@h
| eval process_integrity_level=lower(process_integrity_level)
| stats count values(process_id) as process_id by dest

and construct a similar tstats search:

| tstats summariesonly=f count values(Processes.process_id) as process_id from datamodel=Endpoint.Processes where earliest=-4h@h latest=-2h@h by Processes.dest

The underlying searches should be similar. Optimization may vary. You can verify the SPL in the job inspector: Job > Inspect Job > search.log and the UnifiedSearch component's log output. When summariesonly=f, the searches have similar behavior. When summariesonly=t, the data model search only looks at indexed field values. This is similar to using field::value for indexed fields and TERM() for indexed terms in a normal search.
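As a small aside on that last point, these hypothetical searches show the two forms (the index, field, and value are placeholders):

index=main process_id::1234

index=main TERM(1234)

The field::value form only matches when the field was actually written to the index at ingest time; for a search-time field it returns nothing.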
Hi @Rafaelled, Both parameters should work. See my previous post at https://community.splunk.com/t5/Getting-Data-In/integrating-Splunk-with-Elasticsearch/m-p/696647/highlight/true#M115609 for a few limitations. Depending on the number and size of documents you need to migrate, the app may not be appropriate. A custom REST input would give you the most flexibility with respect to the Elasticsearch Search API. There are pre-written tools like https://github.com/elasticsearch-dump/elasticsearch-dump that may help. If you have a place to host it, an instance of Logstash and a configuration that combines an elasticsearch input with an http output (for Splunk HEC) would be relatively easy to manage. If you don't have a large amount of data or if you're willing to limit yourself to 1 TB per day, a free Cribl Stream license could also do the job. I'm happy to help brainstorm relatively simple solutions here.
When trying to collect more than one field into an MV field, the problem of correlating one entry against the entries in another field can be solved in a number of ways.

stats values() will always sort/dedup the values, hence the loss of order, so using stats list() CAN be a solution if you have knowledge of your data - it will collect up to 100 items in the list, but in event sequence order, so it will retain correlation between each MV field.

Making composite fields is another way, as you have done with mvzip. You can make this work with any number of fields, getting as complex as needed, e.g.

| eval c=mvzip(mvzip(mvzip(mvzip(A, B, "##"), C, "##"), D, "##"), E, "##")

In your case, I would suggest the practice of PRE-pending time, not POST-pending, as there is an immediate benefit to the output from stats values() in that the output results will be sorted in ascending time order. It has a useful benefit in that you can use mvmap to iterate results in a known order.

Also, it's always a good idea to remove fields BEFORE mvexpand. If you don't need a field, remove it before incurring the memory cost of mvexpand. Another improvement would be to | eval session_time=mvdedup(session_time) before you mvexpand - there's no point in expanding stuff you will discard.
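Putting those suggestions together, a minimal sketch of the prepend-time pattern; the field names (user, session_id, session_time) are illustrative rather than taken from your search, and you would drop any fields you don't need before the mvexpand step:

| eval session_time=mvzip(_time, session_id, "##") ``` prepend _time so values() sorts in ascending time order ```
| stats values(session_time) as session_time by user
| eval session_time=mvdedup(session_time) ``` no point expanding duplicates ```
| mvexpand session_time
| eval _time=tonumber(mvindex(split(session_time, "##"), 0)), session_id=mvindex(split(session_time, "##"), 1)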
I am not sure how you would get two 'records' for the same jobname, as all the aggregations you are doing are by JOBNAME. But do you mean you have two (or more) values of END_TIME for the same single JOBNAME in the output? That would be because you are doing

| stats values(Date_of_reception) as Date_of_reception values(*_TIME) as *_TIME by JOBNAME

i.e. values(*_TIME) is giving you all the values of the START and END TIME fields. If you just want the latest END_TIME, then you need to use max and min as needed, i.e.

| stats values(Date_of_reception) as Date_of_reception max(END_TIME) as END_TIME min(START_TIME) as START_TIME by JOBNAME

would give you the earliest start and the latest end. But if you are getting more than one event, then any use of values() will give you all values; note also that values() will sort the values in the multivalue field, so bear that in mind.
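A quick, self-contained illustration of values() versus max()/min(), using synthetic data only:

| makeresults count=3
| streamstats count
| eval JOBNAME="JOB_A", START_TIME=count*10, END_TIME=count*100
| stats values(END_TIME) as all_end_times max(END_TIME) as END_TIME min(START_TIME) as START_TIME by JOBNAME

all_end_times comes back as a sorted multivalue field (100 200 300), while END_TIME and START_TIME are single values (300 and 10).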
You have two options:

1. DBConnect - You can bring audit logs from the Oracle DBA_AUDIT_TRAIL table. Make sure the DBA has configured the database to send audit logs to DBA_AUDIT_TRAIL. If other RDS databases have the option to store audit logs in a table/view, you can also bring those logs in via DBConnect.

2. AWS RDS Logs - Have AWS CloudWatch collect the RDS logs and put them in an S3 bucket. Bring the RDS logs into Splunk via SQS-based S3 inputs using the Splunk Add-on for AWS. You can then build a search-time parser and display the result in a dashboard.
Sorry for such a late reply. You can use either DBConnect, or send the RDS logs via AWS CloudWatch and bring the data into Splunk using the AWS TA for Splunk with SQS-based S3 inputs.
What does mongod.log say? What version of Splunk?  What version of KVStore?  Are you using mmapv1 or wiredTiger?
KV Store changed status to failed. KVStore process terminated. 10/2/2025, 12:23:23 am

Failed to start KV Store process. See mongod.log and splunkd.log for details. 10/2/2025, 12:23:23 am

KV Store process terminated abnormally (exit code 4, status PID 6147 killed by signal 4: Illegal instruction). See mongod.log and splunkd.log for details.

The above-mentioned issues are showing.

#kvstore @kvstore @splunk
Hi @HaakonRuud,

When mode = single, it's implied that the stats setting is ignored. As a result, the event will contain an average of samples collected every samplingInterval milliseconds over interval seconds.

Your collection interval is 60 seconds (1 minute):

interval = 60

Your mstats time span is also 1 minute:

span=1m

As a result, you only have 1 event per time interval, so the mean and maximum will be equivalent.
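For reference, a hedged sketch of what such a perfmon stanza might look like on the Windows forwarder; the object, counter, instance, and index names are placeholders rather than your actual configuration:

# inputs.conf (illustrative values only)
[perfmon://CPU]
object = Processor
counters = % Processor Time
instances = _Total
mode = single
interval = 60
samplingInterval = 1000
index = perfmon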
Hi @madhav_dholakia,

Can you provide more context for the defaults object and a small sample dashboard that doesn't save correctly? Is defaults defined at the top level of the dashboard? I.e.:

{
  "visualizations": { ... },
  "dataSources": { ... },
  "defaults": { ... },
  ...
}
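For reference, a top-level defaults block in Dashboard Studio usually takes roughly this shape; the time range values here are only an example, not your settings:

{
  "defaults": {
    "dataSources": {
      "ds.search": {
        "options": {
          "queryParameters": {
            "earliest": "-24h@h",
            "latest": "now"
          }
        }
      }
    }
  }
}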
Hi @splunk_user_99,

Which version of MLTK do you have installed? The underlying API uses a simple payload:

{"name":"my_model","app":"search"}

where name is the value entered in New Main Model Title and app is derived from the value selected in Destination App. The app list is loaded when Operationalize is clicked and sorted alphabetically by display name. On submit, the request payload is checked to verify that it contains only the 'app' and 'name' keys.

Do you have the same issue in a sandboxed (private, incognito, etc.) browser session with extensions disabled?
Hi @splunklearner,

I guess the answer really is "it depends"; however, in this scenario we are overwriting the original data with just the JSON, rather than adding an additional extracted field.

Search-time field extractions/eval/changes are executed every time you search the data, and in some cases need to be evaluated before the search can be filtered down. For example, if you search for "uri=/test", you may find that at search time it needs to process all events to determine the uri field for each event before it can filter down. Being able to search against the URI without having to modify every event means it should be faster.

The disadvantage of index-time extractions is that they don't apply retrospectively to data you already have, whereas search-time extractions will apply to everything currently indexed.
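As an illustration of the search-side difference, assuming uri were an index-time field (the index and field names here are hypothetical):

``` search-time field: uri must be extracted per event before filtering ```
index=web uri=/test

``` index-time field: the value can be matched directly against the index ```
index=web uri::/test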
I have identified that the aggqueue and tcpout_Default_autolb_group queues are having the most issues, and that the aggregator processor and one sourcetype have the most CPU utilization. Now, how can I fix this?
Hi @kiran_panchavat,

This is what is present in my current props.conf on the Cluster Manager for this sourcetype (which was copied from another sourcetype):

[sony_waf]
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 25
TIME_FORMAT = %b %d %H:%M:%S
SEDCMD-newline_remove = s/\\r\\n/\n/g
SEDCMD-formatxml = s/></>\n</g
LINE_BREAKER = ([\r\n]+)[A-Z][a-z]{2}\s+\d{1,2}\s\d{2}:\d{2}:\d{2}\s
SHOULD_LINEMERGE = False
TRUNCATE = 10000

Now, do I need to add it here in this props.conf and push it to the indexers? Or create a new props.conf on the Deployer which includes your props.conf stanza and push it to the search heads?
@Nawab

These are the 4 main scenarios I would imagine in a simple forwarder-receiver topology:

A. The forwarder crashes while it is unable to forward data to the receiver (regardless of whether that's due to an unreachable receiver, network issues, or an incorrect/missing outputs.conf or the like): in-memory data will not be moved into the persistent queue, even if the persistent queue still has enough space to accommodate the in-memory queue data.

B. The forwarder is gracefully shut down while it is unable to forward data to the receiver (same causes as above): in-memory data will not be moved into the persistent queue, even if the persistent queue still has enough space to accommodate the in-memory queue data.

C. The forwarder crashes, but has been able to forward data to the receiver so far: persistent queue data will be preserved on disk; however, in-memory data is very likely to be lost.

D. The forwarder is gracefully shut down, but has been able to forward data to the receiver so far: both persistent queue and in-memory data will be forwarded (and indexed) before the forwarder is fully shut down.
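For context, persistent queues are configured per input in inputs.conf on the forwarder; a minimal sketch, assuming a syslog-style UDP input (the port and sizes are illustrative, and persistent queues apply to network and scripted inputs rather than file monitors):

# inputs.conf on the forwarder (illustrative values)
[udp://514]
queueSize = 1MB
persistentQueueSize = 100MB

This disk-backed queue is what scenarios C and D above rely on.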