All Posts

Hi @ws, you have many ways to check for repetitive logs. The easiest is to save the logs in files with different names (e.g. adding date and time) and use the crcSalt = <SOURCE> option in the related inputs.conf stanza. Ciao. Giuseppe
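For reference, a minimal sketch of such a stanza, assuming a monitor input (the path and sourcetype below are placeholders, not taken from your environment):

[monitor:///var/log/myapp/*.log]
sourcetype = myapp
crcSalt = <SOURCE>

With crcSalt = <SOURCE>, the full source path is added to the CRC calculation, so files with different names are always treated as distinct sources.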
Hi @Tajuddin, first of all, to share something like log samples or code, please use the "Insert/Edit code sample" button. Anyway, this seems to be a JSON log; have you tried using INDEXED_EXTRACTIONS=JSON or the spath command? Otherwise, it's possible to use a regex. Ciao. Giuseppe
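As a search-time sketch with spath (the index, sourcetype, and field paths below are illustrative placeholders):

index=your_index sourcetype=your_sourcetype
| spath
| table some.nested.field another.field

spath with no arguments extracts all fields from a valid JSON _raw; you can also use spath path=some.nested.field output=myfield to pull a single value.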
Please note that there is actually a different character in place of the emoji shown; when I posted, it was automatically converted.
I have the following log in Splunk from which I want to extract room names and their respective ids. Please help with a Splunk search to print the room names along with deduplicated ids.

Log Event

TIME:10/Feb/2025:03:08:17 -0800 TYPE:INFO APP_NAME:ROOM_LOOKUP_JOBS APP_BUILD_VERSION:NOT_DEFINED CLIENT_IP:100.102.16.183 CLIENT_USER_AGENT:Unknown Browser CLIENT_OS_N_DEVICE:Unknown OS Unknow Devices CLIENT_REQUEST_METHOD:GET CLIENT_REQUEST_URI:/supporting-apps/room-lookup-job/index.php CLIENT_REQUEST_TYPE:HttpRequest CLIENT_REQUEST_CONTENT_SIZE:0 SERVER_HOST_NAME:roomlookupjob-prod.us-west-2i.app.apple.com SERVER_CONTAINER_ID:roomlookupjob-prod-5d96c45c64-w4q79 REQUEST_UNIQUE_ID:Z6neG-5vAofNnSWuA5msAQAAAAA MESSAGE="Rooms successfully updated for building - IL01: [{\"name\":\"Chiaroscuro (B277) [AVCN] (3) {R} IL01 2nd\",\"id\":\"6C30AF02-5900-480C-873F-8B0763DE95F8\"},{\"name\":\"2-Pop (N221) [AVCN] (8) {R} IL01 2nd\",\"id\":\"7853CB27-A083-454F-90A6-006854396AD1\"},{\"name\":\"Bonk (B380) [AVCN] (3) {R} IL01 3rd\",\"id\":\"88AF6D48-F930-4A98-9171-BE1FAAF0E36D\"},{\"name\":\"Montage (D203) [AVCN] (7) {R} IL01 2nd\",\"id\":\"29C44E4D-8628-4815-9AB8-CF49682A9EDC\"},{\"name\":\"Cougar - Interview Room Only (B138) (4) {R} IL01 1st\",\"id\":\"D1F40F0F-E40D-46B3-BD62-2C9A054E9E70\"},{\"name\":\"Iceman - Interview Room Only (B140) (3) {R} IL01 1st\",\"id\":\"38348FD5-021A-466E-A860-0A45CA9CD18F\"},{\"name\":\"Merlin - Interview Room Only (B136) (2) {R} IL01 1st\",\"id\":\"51211C55-94EA-4B38-97B6-2EB20369FDAF\"},{\"name\":\"Viper - Interview Room Only (B134) (10) {R} IL01 1st\",\"id\":\"940E9844-49BF-4B4E-B114-A2D734203C37\"},{\"name\":\"Maverick - Interview Room Only (B142) (4) {R} IL01 1st\",\"id\":\"6D29660F-09C3-4634-8DE5-0ECFAA5639DB\"},{\"name\":\"Vignette (R278) [AVCN] (12) {R} IL01 2nd\",\"id\":\"00265678-8775-4E95-A7CA-8454AD35C4A4\"},{\"name\":\"Broom Wagon (A317) [AVCN] (14) {R} IL01 3rd\",\"id\":\"1D1EB626-C5D2-4289-B5DA-A7F6EAA79AE8\"},{\"name\":\"Jump Cut (D211) [AVCN] (22) {R} IL01 2nd\",\"id\":\"66FF42BA-3ED6-48E6-886D-08CE18124110\"},{\"name\":\"{M} The Roundhouse (P404) (6) {R} IL01 4th\",\"id\":\"2477B40A-97BF-E2C7-4908-EF5D172D5DD3\"},{\"name\":\"Corncob (S323) [AVCN] (7) {R} IL01 3rd\",\"id\":\"F01706E7-F19B-3035-CEF4-4D13FC792B0E\"},{\"name\":\"Rouleur (Q311) [AVCN] (14) {R} IL01 3rd\",\"id\":\"D96D16CE-557E-90A0-AF65-9FCAAE406659\"},{\"name\":\"Field Sprint (S341) [AVCN] (13) {R} IL01 3rd\",\"id\":\"DA59EAC2-8491-3EE2-9B78-A54E5A3FE704\"},{\"name\":\"{M} Storyboard (C218) [AVCN] (27) {R} IL01 2nd\",\"id\":\"45C4588D-0CB5-D035-5C2E-517477B1D7CB\"},{\"name\":\"Zoetrope (S241) [AVCN] (8) {R} IL01 2nd\",\"id\":\"58750290-4C79-9AFB-B277-BDE5A219D0E5\"},{\"name\":\"Sizzle Reel (P248) [AVCN] (8) {R} IL01 2nd\",\"id\":\"DF8004E6-25B8-3B18-794D-253D83FE1279\"},{\"name\":\"Rough Cut (N213) [AVCN] (7) {R} IL01 2nd\",\"id\":\"A3792CEC-BF73-F207-DB06-3884D1042C80\"}]"

My current search:

index=roomlookup_prod | search "Rooms successfully updated for building - IL01"

Expected results:

name                                          id
Chiaroscuro (B277) [AVCN] (3) {R} IL01 2nd    6C30AF02-5900-480C-873F-8B0763DE95F8
2-Pop (N221) [AVCN] (8) {R} IL01 2nd          7853CB27-A083-454F-90A6-006854396AD1
and so on..
@splunklearner  Please check this solution.  Solved: Re: Why would INDEXED_EXTRACTIONS=JSON in props.co... - Splunk Community
@splunklearner Verify in splunkd.log whether your Universal Forwarder (UF) or Heavy Forwarder (HF) is sending duplicate events. Check inputs.conf and make sure crcSalt = <SOURCE> is set to avoid duplicate ingestion.
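As a quick check for duplicates already indexed, a sketch like this (the index name is a placeholder) can confirm whether identical raw events exist:

index=your_index earliest=-1h
| stats count by _raw
| where count > 1

A count greater than 1 for the same _raw in the same time range usually points to the same file being re-read or sent twice.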
Team, when we search for HTTP code 500 (internal server error) in Splunk, it works fine, but when we use the same query in a Python script, we don't get any results. Could you please help me with this? Thanks
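A common cause is quoting or default time-range differences between the UI and the script. As a minimal sketch using the Splunk REST export endpoint (the host, credentials, index, and time range below are placeholder assumptions):

import requests

resp = requests.post(
    "https://splunk.example.com:8089/services/search/jobs/export",
    auth=("admin", "changeme"),
    data={
        # the SPL string must start with the "search" keyword when sent via REST
        "search": 'search index=web "500" "internal server error" earliest=-24h',
        "output_mode": "json",
    },
    verify=False,  # only if the management port uses a self-signed certificate
)
print(resp.text)

If this returns results but your script does not, compare the exact SPL string and the earliest/latest values your script sends.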
Hi all, I have added the stanza below to props.conf and pushed it to the indexers. Fields are being extracted as JSON, but logs are getting duplicated. Please help me.

[sony_waf]
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 25
TIME_FORMAT = %b %d %H:%M:%S
LINE_BREAKER = ([\r\n]+)
pulldown_type = true
SEDCMD-removeheader = s/^[^\{]*//g
SHOULD_LINEMERGE = false
TRUNCATE = 20000
KV_MODE = json
AUTO_KV_JSON = true
Thank you for the suggestion. We were able to restore the KV store even without changing the dynamic captain.
@dy1 Check the status of the KV store with the following command:

/opt/splunk/bin/splunk show kvstore-status -auth <user_name>:<password>

Review the mongod.log and splunkd.log files for more detailed error messages. If a lock file is causing the issue, you can remove it:

sudo rm -rf /xxx/kvstore/mongo/mongod.lock

Renaming the current MongoDB folder can help reset the KV store:

mv $SPLUNK_HOME/var/lib/splunk/kvstore/mongo $SPLUNK_HOME/var/lib/splunk/kvstore/mongo.old

Steps:
1. Stop Splunk
2. Rename the current mongo folder to mongo.old
3. Start Splunk

A new mongo folder will be created with all the components.
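Consolidated as a shell sequence (assuming a default /opt/splunk install path):

/opt/splunk/bin/splunk stop
mv /opt/splunk/var/lib/splunk/kvstore/mongo /opt/splunk/var/lib/splunk/kvstore/mongo.old
/opt/splunk/bin/splunk start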
I understand Splunk will check whether events are the same using the first 256 characters. But in my current situation, would your recommendation be that we need to customize the application to implement a checkpoint mechanism for tracking previously indexed records?
Hi team, I have been working on assigning a custom urgency level to all notables triggered through our correlation searches in Splunk Enterprise Security (ES). Specifically, I aimed to set the severity to "high" by adding eval severity=high to each relevant search. However, despite implementing this change, some of the notables are still being categorized as "medium".

Could you please assist with identifying what might be causing this discrepancy and suggest any additional steps required to ensure all triggered notables reflect the intended high urgency level?

Thank you for your assistance
Currently, we are not focusing on searches but rather on the application created to pull data from the API provided by the destination party. Based on my understanding of the current setup, the new data is retrieved by the application through the destination API. The data includes fields such as ID, case status, case close date, and others. At this point, duplicates will be identified based on the ID field.

Please correct me if I'm wrong, but given the current setup, wouldn't this result in duplicate data? We are calling the API at one-hour intervals, each call covering a four-hour window of logs. For example: at 10am, 6am-10am; at 11am, 11am-3pm.
@gcusello also, could you explain this in detail: "if the condition of no connectivity is a temporary condition, having a Heavy Forwarder on premise will give you sufficient cache to store logs until the connection is resumed."
Just wanted to add an xpath command solution that also works, simply as a future reference for users that can't go with the spath command solution.

| makeresults
| eval _raw="<?xml version=\"1.0\" encoding=\"utf-8\"?>
<soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\">
  <soapenv:Body>
    <ns3:LogResponse xmlns:ns2=\"http://randomurl.com/sample1\" xmlns:ns3=\"http://randomurl.com/sample2\">
      <LogResponse>
        <ResponseCode>OK</ResponseCode>
        <State>Simple</State>
        <Transactions>
          <TransactionName>CHANGED</TransactionName>
        </Transactions>
        <Transactions>
          <TransactionData>CHANGE_SIMPLE</TransactionData>
        </Transactions>
        <ServerTime>1649691711637</ServerTime>
        <SimpleResponseCode>OK</SimpleResponseCode>
        <nResponseCode>
          <nResponseCode>OK</nResponseCode>
        </nResponseCode>
        <USELESS>VALUES</USELESS>
        <MORE_USELESS>false</MORE_USELESS>
      </LogResponse>
    </ns3:LogResponse>
  </soapenv:Body>
</soapenv:Envelope>"
| eval xml=replace(_raw, "^<\?xml.+\?>[\r\n]*", "") ``` xpath does not like the <?xml ...?> text declaration, so remove it ```
| xpath field=xml outfield=ResponseCode "//*[local-name()='ResponseCode']" ``` use *[local-name()='<value>'] to ignore namespace declarations, i.e. xmlns='something' ```
| xpath field=xml outfield=SimpleResponseCode "//*[local-name()='SimpleResponseCode']"
| xpath field=xml outfield=nResponseCode "//*[local-name()='nResponseCode']/nResponseCode"
Could you explain the solution structure and how it works when there is no internet for 14 hours in the factory data room? How will logging and monitoring work, and where can the logs be stored while there is no connection? @gcusello
Hi @LinkLoop, You can verify Splunk is connected to outputs with the list forward-server command:

& "C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" list forward-server -auth admin:password

Active forwards:
    splunk.example.com:9997 (ssl)
Configured but inactive forwards:
    None

The command requires authentication, so you'll need to know the local Splunk admin username and password defined at install time. If the local management port is disabled, the command will not be available. You can otherwise search local logs for forwarding activity:

Select-String -Path "C:\Program Files\SplunkUniversalForwarder\var\log\splunk\splunkd.log" -Pattern "Connected to idx="
Select-String -Path "C:\Program Files\SplunkUniversalForwarder\var\log\splunk\metrics.log" -Pattern "group=per_index_thruput, series=`"_internal`""
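If you also have access to the receiving indexer, a sketch like this (the forwarder hostname is a placeholder) can confirm inbound connections from the other side:

index=_internal source=*metrics.log* group=tcpin_connections hostname=YOUR_UF_HOST
| stats latest(_time) as last_seen by hostname sourceIp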
Hi @MichaelM1, MaxUserPort adjusts limits on ephemeral ports. From the perspective of the intermediate forwarder, this would be the maximum port number allocated for an outbound connection to a downstream receiver. The intermediate forwarder would only listen on one port, or however many input ports you have defined. TcpTimedWaitDelay adjusts the amount of time a closed socket is held before it can be reused by another winsock client/server.

As a quick test, I installed Splunk Universal Forwarder 9.4.0 for Windows on a clean install of Windows Server 2019 Datacenter Edition named win2019 with the following settings:

# %SPLUNK_HOME%\etc\system\local\inputs.conf
[splunktcp://9997]
disabled = 0

# %SPLUNK_HOME%\etc\system\local\outputs.conf
[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = splunk:9997

[tcpout-server://splunk:9997]

where splunk is a downstream receiver. To simulate 1000+ connections, I installed Splunk Universal Forwarder 9.4.0 for Linux on a separate system with the following settings:

# $SPLUNK_HOME/etc/system/local/limits.conf
[thruput]
maxKBps = 0

# $SPLUNK_HOME/etc/system/local/outputs.conf
[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = win2019:9997

[tcpout-server://win2019:9997]

# $SPLUNK_HOME/etc/system/local/server.conf
# additional default settings not shown
[general]
parallelIngestionPipelines = 2000

[queue]
maxSize = 1KB

[queue=AQ]
maxSize = 1KB

[queue=WEVT]
maxSize = 1KB

[queue=aggQueue]
maxSize = 1KB

[queue=fschangemanager_queue]
maxSize = 1KB

[queue=parsingQueue]
maxSize = 1KB

[queue=remoteOutputQueue]
maxSize = 1KB

[queue=rfsQueue]
maxSize = 1KB

[queue=vixQueue]
maxSize = 1KB

parallelIngestionPipelines = 2000 creates 2000 connections to win2019:9997. (Don't do this in real life. It's a Splunk instance using 2000x the resources of a typical instance. You'll consume memory very quickly as stack space is allocated for new threads.)

So far, I have no issues creating 2000 connections. Do you have a firewall or transparent proxy between forwarders and your intermediate forwarder? If yes, does the device limit the number of inbound connections per destination ip:port:protocol tuple?
Hi, I want to omit the line below from the Splunk alert payload if an email address is not provided by the user:

"action.email.to": {email},

What is the best way to do this? An if/else statement inside the payload throws a syntax error.
Hi @antoniolamonica, Data model root search datasets start with a base search. Endpoint.Processes is:

(`cim_Endpoint_indexes`) tag=process tag=report
| eval process_integrity_level=lower(process_integrity_level)

This search is expanded by Splunk to include the contents of the cim_Endpoint_indexes macro and all event types that match tag=process and tag=report. To compare like-for-like searches, start with the same base search:

(`cim_Endpoint_indexes`) tag=process tag=report earliest=-4h@h latest=-2h@h
| eval process_integrity_level=lower(process_integrity_level)
| stats count values(process_id) as process_id by dest

and construct a similar tstats search:

| tstats summariesonly=f count values(Processes.process_id) as process_id from datamodel=Endpoint.Processes where earliest=-4h@h latest=-2h@h by Processes.dest

The underlying searches should be similar, although optimization may vary. You can verify the SPL in the job inspector (Job > Inspect Job > search.log) and the UnifiedSearch component's log output. When summariesonly=f, the searches have similar behavior. When summariesonly=t, the data model search only looks at indexed field values. This is similar to using field::value for indexed fields and TERM() for indexed terms in a normal search.