All Posts

@splunklearner Please check this solution: Solved: Re: Why would INDEXED_EXTRACTIONS=JSON in props.co... - Splunk Community
@splunklearner Verify in splunkd.log whether your Universal Forwarder (UF) or Heavy Forwarder (HF) is sending duplicate events. Check inputs.conf and make sure crcSalt = <SOURCE> is set to avoid duplicate ingestion.
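For reference, a minimal sketch of the kind of monitor stanza being described; the path, index, and sourcetype below are placeholders, not taken from this thread:

# inputs.conf (illustrative only)
[monitor:///var/log/waf/waf.log]
index = main
sourcetype = sony_waf
# crcSalt = <SOURCE> adds the full file path to the CRC calculation, so files
# that begin with identical content but live at different paths are tracked separately
crcSalt = <SOURCE>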
Team, when we search for HTTP code 500 (internal server error) in Splunk it works fine, but the same query run from a Python script returns no results. Could you please help me with this? Thanks
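A minimal sketch of running the same search through the Splunk REST API with the requests library; the host, credentials, index, and query below are placeholders. Two common causes of empty results from a script are omitting the leading "search" keyword and not passing an explicit time range (the REST defaults differ from the UI time picker):

import requests

SPLUNK = "https://splunk.example.com:8089"   # placeholder management URI
AUTH = ("svc_user", "password")              # placeholder credentials

params = {
    # SPL submitted over REST must start with the "search" command
    "search": 'search index=web status=500 "internal server error"',
    "earliest_time": "-24h",
    "latest_time": "now",
    "output_mode": "json",
}

resp = requests.post(f"{SPLUNK}/services/search/jobs/export",
                     data=params, auth=AUTH, verify=False, stream=True)
for line in resp.iter_lines():
    if line:
        print(line.decode())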
Hi all, I have added the below stanza to props.conf and pushed it to the indexers. Fields are being extracted from the JSON, but the logs are getting duplicated. Please help me.

[sony_waf]
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 25
TIME_FORMAT = %b %d %H:%M:%S
LINE_BREAKER = ([\r\n]+)
pulldown_type = true
SEDCMD-removeheader = s/^[^\{]*//g
SHOULD_LINEMERGE = false
TRUNCATE = 20000
KV_MODE = json
AUTO_KV_JSON = true
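For reference, the linked answer about INDEXED_EXTRACTIONS points at a common cause of seeing JSON data twice: extracting the same JSON both at index time (INDEXED_EXTRACTIONS = json where the data is ingested) and again at search time (KV_MODE = json / AUTO_KV_JSON = true). If that is what is happening here, a hedged sketch of the search-time side of the fix would be:

# props.conf on the search head (illustrative only)
[sony_waf]
# when INDEXED_EXTRACTIONS = json is already applied at ingest,
# disable the second, search-time JSON extraction
KV_MODE = none
AUTO_KV_JSON = false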
Thank you for the suggestion. We were able to restore the KV store even without changing the dynamic captain.
@dy1 See the status of the KV store with the following command:

/opt/splunk/bin/splunk show kvstore-status -auth <user_name>:<password>

Review the mongod.log and splunkd.log files for more detailed error messages. If there's a lock file causing the issue, you can remove it:

sudo rm -rf /xxx/kvstore/mongo/mongod.lock

Renaming the current MongoDB folder can also help reset the KV store:

mv $SPLUNK_HOME/var/lib/splunk/kvstore/mongo $SPLUNK_HOME/var/lib/splunk/kvstore/mongo.old

Steps:
1. Stop Splunk.
2. Rename the current mongo folder to mongo.old.
3. Start Splunk.

A new mongo folder will be created with all the components.
I understand Splunk will check the first 256 characters to determine whether content is the same. But in my current situation, would your recommendation be that we need to customize the application to implement a checkpoint mechanism for tracking previously indexed records?
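A minimal sketch of the checkpoint idea being asked about, assuming the application pulls records from an API and each record carries a unique ID; the file name, field names, and send() hook are hypothetical:

import json, os

CHECKPOINT_FILE = "checkpoint.json"   # hypothetical local state file

def load_seen_ids():
    if os.path.exists(CHECKPOINT_FILE):
        with open(CHECKPOINT_FILE) as f:
            return set(json.load(f))
    return set()

def save_seen_ids(seen):
    with open(CHECKPOINT_FILE, "w") as f:
        json.dump(sorted(seen), f)

def forward_new_records(records, send):
    # Send only records whose ID has not been forwarded before.
    seen = load_seen_ids()
    for rec in records:
        if rec["id"] not in seen:
            send(rec)            # e.g. write to a monitored file or POST to HEC
            seen.add(rec["id"])
    save_seen_ids(seen)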
Hi team, I have been working on assigning a custom urgency level to all notables triggered through our correlation searches in Enterprise Security (ES). Specifically, I aimed to set the severity to "high" by adding eval severity=high to each relevant search. However, despite implementing this change, some of the notables are still being categorized as "medium". Could you please assist with identifying what might be causing this discrepancy and suggest any additional steps required to ensure all triggered notables reflect the intended high urgency level? Thank you for your assistance.
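One thing worth checking (an assumption on my part, not a confirmed diagnosis): in SPL, eval severity=high without quotes assigns the value of a field named high, which is usually null, so ES falls back to its normal urgency calculation; urgency is also derived from both the event severity and the priority of the asset or identity involved, not from severity alone. A minimal sketch of the quoted form:

| eval severity="high" ``` quoted string literal; without quotes, high is read as a field name ```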
Currently, we are not focusing on searches but rather on the application created to pull data from the API provided by the destination party. Based on my understanding of the current setup, new data is retrieved by the application through the destination API. The data includes fields such as ID, case status, case close date, and others. At this point, duplicates would be identified based on the ID field.

Please correct me if I'm wrong, but given the current setup, wouldn't this result in duplicate data, since we are calling at a 1-hour interval and each call pulls a 4-hour window of logs? For example:
10am call: 6am-10am
11am call: 11am-3pm
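If the overlapping pull windows do land the same case in the index more than once, a common search-time workaround (a sketch only, with hypothetical index and field names) is to keep just the most recent event per ID:

index=case_data ``` hypothetical index name ```
| dedup ID sortby -_time ``` keep only the latest event for each case ID ```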
@gcusello also, could you explain this in detail: "if the condition of no connectivity is a temporary condition, having a Heavy Forwarder on premise will give you sufficient cache to store logs until the connection is resumed."
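For context on how that caching is typically sized (a sketch only, not necessarily what @gcusello had in mind), the output queue on the on-premise Heavy Forwarder can be enlarged in outputs.conf so events are buffered while the upstream connection is down. The group name, server, and sizes below are illustrative:

# outputs.conf on the on-premise Heavy Forwarder
[tcpout]
defaultGroup = cloud_indexers
# how much data the output queue may hold while the destination is unreachable
maxQueueSize = 7GB

[tcpout:cloud_indexers]
server = indexer.example.com:9997
useACK = true

For a longer outage such as the 14 hours mentioned in this thread, disk-backed persistent queues on the HF's network or scripted inputs (persistentQueueSize in inputs.conf) and enough local disk for the expected log volume may also be needed.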
Just wanted to add an xpath command solution that also works, simply as a future reference for users who can't go with the spath command solution.

| makeresults
| eval _raw="<?xml version=\"1.0\" encoding=\"utf-8\"?>
<soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\">
  <soapenv:Body>
    <ns3:LogResponse xmlns:ns2=\"http://randomurl.com/sample1\" xmlns:ns3=\"http://randomurl.com/sample2\">
      <LogResponse>
        <ResponseCode>OK</ResponseCode>
        <State>Simple</State>
        <Transactions>
          <TransactionName>CHANGED</TransactionName>
        </Transactions>
        <Transactions>
          <TransactionData>CHANGE_SIMPLE</TransactionData>
        </Transactions>
        <ServerTime>1649691711637</ServerTime>
        <SimpleResponseCode>OK</SimpleResponseCode>
        <nResponseCode>
          <nResponseCode>OK</nResponseCode>
        </nResponseCode>
        <USELESS>VALUES</USELESS>
        <MORE_USELESS>false</MORE_USELESS>
      </LogResponse>
    </ns3:LogResponse>
  </soapenv:Body>
</soapenv:Envelope>"
| eval xml=replace(_raw, "^<\?xml.+\?>[\r\n]*", "") ``` xpath does not like the ?xml version/encoding text declaration, so remove it ```
| xpath field=xml outfield=ResponseCode "//*[local-name()='ResponseCode']" ``` use *[local-name()='<value>'] to ignore namespace declarations, i.e. xmlns='something' ```
| xpath field=xml outfield=SimpleResponseCode "//*[local-name()='SimpleResponseCode']"
| xpath field=xml outfield=nResponseCode "//*[local-name()='nResponseCode']/nResponseCode"
Could you explain the solution structure and how it works when there is no internet for 14 hours in the factory data room? How will logging and monitoring work, and where can the logs be stored when there is no connection? @gcusello
Hi @LinkLoop, You can verify Splunk is connected to outputs with the list forward-server command:

& "C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" list forward-server -auth admin:password
Active forwards:
    splunk.example.com:9997 (ssl)
Configured but inactive forwards:
    None

The command requires authentication, so you'll need to know the local Splunk admin username and password defined at install time. If the local management port is disabled, the command will not be available. You can otherwise search local logs for forwarding activity:

Select-String -Path "C:\Program Files\SplunkUniversalForwarder\var\log\splunk\splunkd.log" -Pattern "Connected to idx="
Select-String -Path "C:\Program Files\SplunkUniversalForwarder\var\log\splunk\metrics.log" -Pattern "group=per_index_thruput, series=`"_internal`""
Hi @MichaelM1, MaxUserPort adjusts limits on ephemeral ports. From the perspective of the intermediate forwarder, this would be the maximum port number allocated for an outbound connection to a downstream receiver. The intermediate forwarder would only listen on one port, or however many input ports you have defined. TcpTimedWaitDelay adjusts the amount of time a closed socket will be held until it can be reused by another winsock client/server.

As a quick test, I installed Splunk Universal Forwarder 9.4.0 for Windows on a clean install of Windows Server 2019 Datacenter Edition named win2019 with the following settings:

# %SPLUNK_HOME%\etc\system\local\inputs.conf
[splunktcp://9997]
disabled = 0

# %SPLUNK_HOME%\etc\system\local\outputs.conf
[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = splunk:9997

[tcpout-server://splunk:9997]

where splunk is a downstream receiver. To simulate 1000+ connections, I installed Splunk Universal Forwarder 9.4.0 for Linux on a separate system with the following settings:

# $SPLUNK_HOME/etc/system/local/limits.conf
[thruput]
maxKBps = 0

# $SPLUNK_HOME/etc/system/local/outputs.conf
[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = win2019:9997

[tcpout-server://win2019:9997]

# $SPLUNK_HOME/etc/system/local/server.conf
# additional default settings not shown
[general]
parallelIngestionPipelines = 2000

[queue]
maxSize = 1KB

[queue=AQ]
maxSize = 1KB

[queue=WEVT]
maxSize = 1KB

[queue=aggQueue]
maxSize = 1KB

[queue=fschangemanager_queue]
maxSize = 1KB

[queue=parsingQueue]
maxSize = 1KB

[queue=remoteOutputQueue]
maxSize = 1KB

[queue=rfsQueue]
maxSize = 1KB

[queue=vixQueue]
maxSize = 1KB

parallelIngestionPipelines = 2000 creates 2000 connections to win2019:9997. (Don't do this in real life. It's a Splunk instance using 2000x the resources of a typical instance. You'll consume memory very quickly as stack space is allocated for new threads.)

So far, I have no issues creating 2000 connections. Do you have a firewall or transparent proxy between forwarders and your intermediate forwarder? If yes, does the device limit the number of inbound connections per destination ip:port:protocol tuple?
Hi, I want to ignore the below line inside the Splunk alert's payload if an email address is not provided by the user:

"action.email.to": {email},

What is the best way to do this? An if/else statement inside the payload throws a syntax error.
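A minimal sketch of one way around this, assuming the payload is being built in application code before it is sent to Splunk (the endpoint, names, and credentials here are illustrative): include the key only when an address exists, rather than branching inside the payload template itself.

import requests

def build_alert_payload(name, search, email=None):
    # Build the saved-search payload; add email settings only when provided.
    payload = {
        "name": name,
        "search": search,
        "actions": "email" if email else "",
    }
    if email:
        payload["action.email.to"] = email   # key is omitted entirely when no address was given
    return payload

payload = build_alert_payload("My Alert", "index=web status=500", email=None)
requests.post("https://splunk.example.com:8089/servicesNS/nobody/search/saved/searches",
              data=payload, auth=("admin", "password"), verify=False)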
Hi @antoniolamonica, Data model root search datasets start with a base search. Endpoint.Processes is:

(`cim_Endpoint_indexes`) tag=process tag=report
| eval process_integrity_level=lower(process_integrity_level)

This search is expanded by Splunk to include the contents of the cim_Endpoint_indexes macro and all event types that match tag=process and tag=report. To compare like-for-like searches, start with the same base search:

(`cim_Endpoint_indexes`) tag=process tag=report earliest=-4h@h latest=-2h@h
| eval process_integrity_level=lower(process_integrity_level)
| stats count values(process_id) as process_id by dest

and construct a similar tstats search:

| tstats summariesonly=f count values(Processes.process_id) as process_id from datamodel=Endpoint.Processes where earliest=-4h@h latest=-2h@h by Processes.dest

The underlying searches should be similar. Optimization may vary. You can verify the SPL in the job inspector (Job > Inspect Job > search.log) and the UnifiedSearch component's log output. When summariesonly=f, the searches have similar behavior. When summariesonly=t, the data model search only looks at indexed field values. This is similar to using field::value for indexed fields and TERM() for indexed terms in a normal search.
Hi @Rafaelled, Both parameters should work. See my previous post at https://community.splunk.com/t5/Getting-Data-In/integrating-Splunk-with-Elasticsearch/m-p/696647/highlight/true#M115609 for a few limitations. Depending on the number and size of documents you need to migrate, the app may not be appropriate. A custom REST input would give you the most flexibility with respect to the Elasticsearch Search API. There are pre-written tools like https://github.com/elasticsearch-dump/elasticsearch-dump that may help. If you have a place to host it, an instance of Logstash and a configuration that combines an elasticsearch input with an http output (for Splunk HEC) would be relatively easy to manage. If you don't have a large amount of data or if you're willing to limit yourself to 1 TB per day, a free Cribl Stream license could also do the job. I'm happy to help brainstorm relatively simple solutions here.
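To make the custom-script option concrete, here is a rough sketch of pulling documents from Elasticsearch with the scroll API and posting them to Splunk HEC. All hosts, tokens, index names, sourcetypes, and batch sizes are placeholders, and error handling and checkpointing are omitted:

import requests

ES = "http://elasticsearch.example.com:9200"                       # placeholder
HEC = "https://splunk.example.com:8088/services/collector/event"   # placeholder
HEC_HEADERS = {"Authorization": "Splunk 00000000-0000-0000-0000-000000000000"}

def migrate(index):
    # Open a scroll context and page through all documents in the index.
    resp = requests.post(f"{ES}/{index}/_search?scroll=2m",
                         json={"size": 500, "query": {"match_all": {}}}).json()
    scroll_id, hits = resp["_scroll_id"], resp["hits"]["hits"]
    while hits:
        for doc in hits:
            # Wrap each document as a HEC event; sourcetype/index are illustrative.
            requests.post(HEC, headers=HEC_HEADERS, verify=False,
                          json={"event": doc["_source"],
                                "sourcetype": "es:migrated",
                                "index": "main"})
        resp = requests.post(f"{ES}/_search/scroll",
                             json={"scroll": "2m", "scroll_id": scroll_id}).json()
        scroll_id, hits = resp["_scroll_id"], resp["hits"]["hits"]

migrate("my-old-index")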
When trying to collect more than one field into a MV field, the problem of correlating one entry against the entries in another field can be solved in a number of ways. stats values() will always sort/dedup the values, hence the loss of order, so using stats list() CAN be a solution if you have knowledge of your data: it will collect up to 100 items in the list, but in event sequence order, so it will retain correlation between each MV field.

Making composite fields is another way, as you have done with mvzip. You can make this work with any number of fields, getting as complex as needed, e.g.

| eval c=mvzip(mvzip(mvzip(mvzip(A, B, "##"), C, "##"), D, "##"), E, "##")

In your case, I would suggest the practice of PRE-pending time, not POST-pending, as there is an immediate benefit to the output from stats values() in that the results will be sorted in ascending time order. It also has a useful benefit in that you can use mvmap to iterate results in a known order.

Also, it is always a good idea to remove fields BEFORE mvexpand. If you don't need a field, remove it before incurring the memory cost of mvexpand. Another improvement would be to

| eval session_time=mvdedup(session_time)

before you mvexpand - there's no point in expanding stuff you will discard.
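A small sketch of the prepend-time pattern described above, with hypothetical field names (session_id, user):

| eval session_time=mvzip(_time, session_id, "##") ``` time first, so values() output sorts chronologically ```
| stats values(session_time) as session_time by user
| fields user session_time ``` drop unneeded fields before mvexpand ```
| mvexpand session_time
| eval _time=mvindex(split(session_time, "##"), 0), session_id=mvindex(split(session_time, "##"), 1)
| fields - session_time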
I am not sure how you would get two 'records' for the same jobname, as all the aggregations you are doing are by JOBNAME. But do you mean you have two (or more) values of END_TIME for the same single JOBNAME in the output? That would be because you are doing

| stats values(Date_of_reception) as Date_of_reception values(*_TIME) as *_TIME by JOBNAME

i.e. values(*_TIME) is giving you all the values of the START and END TIME fields. If you just want the latest END_TIME then you just need to use max and min as needed, i.e.

| stats values(Date_of_reception) as Date_of_reception max(END_TIME) as END_TIME min(START_TIME) as START_TIME by JOBNAME

would give you the earliest start and the latest end. But if you are getting more than one event then any use of values() will give you all values; note also that values() will sort the values in the multivalue field, so bear that in mind.
You have two options:

1. DB Connect - You can bring audit logs in from the Oracle DBA_AUDIT_TRAIL table. Make sure the DBA has configured the database to send the audit logs to DBA_AUDIT_TRAIL. If other RDS databases have an option to store audit logs in a table/view, you can also bring those logs in via DB Connect.

2. AWS RDS logs - Have AWS CloudWatch collect the RDS logs and put them in an S3 bucket. Bring the RDS logs into Splunk via SQS-based S3 inputs using the Splunk Add-on for AWS. You can then build a parser at search time and display the results in a dashboard.