All Posts


Thank you so much for the explanation. This makes so much sense when you describe it (and is something I should have been able to think of myself).
Hi @BalajiRaju, can you provide the base search you're using in Splunk and the Python code for us to see?
Here's the context: I created a Splunk add-on app in a Splunk Enterprise trial instance and, after creating it, also created an input using modular Python code with an API as the source. I ran validation & packaging and downloaded the package to get a .spl file. I then uploaded the .spl file to a Splunk Cloud environment; the upload went through without errors but with warnings, and it still let me install the uploaded app. After installing the app and restarting the Cloud environment, I created an input using the installed app, created a new index for it, and searched that index. After waiting for the input to generate more events (its interval is 10 minutes), the warning below appears and keeps recurring every 10 minutes. My understanding is that the events are being redirected to the lastchanceindex. However, when I create the same input and index in the Splunk Enterprise instance where I built the app, it generates events correctly and does not redirect them to the lastchanceindex. What could be the issue in this scenario, and how can I solve it? I've checked other questions here in the community and I don't think any of them cover this case. I hope someone can help. Thanks!

"Search peer idx-i-0c2xxxxxxxxxx1d15.xxxxxx-xxxxxxxx.splunkcloud.com has the following message: Redirected event for unconfigured/disabled/deleted index=xxx with source="xxx" host="host::xxx" sourcetype="sourcetype::xxx" into the LastChanceIndex. So far received events from 15 missing index(es)."
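That warning means the index named in the input's configuration does not exist (or is disabled) on the Cloud indexers, so events are parked in the LastChanceIndex. As a quick check, and purely as a sketch (the index name xxx is the placeholder from the warning), a REST search like this should show whether the index is actually present on every search peer:

| rest /services/data/indexes splunk_server=*
| search title="xxx"
| table splunk_server title disabled

If nothing comes back, the index created in the Cloud UI may have a different name than the one set in the input stanza.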
Hi @Karthikeya, as you can read at https://splunkbase.splunk.com/app/4353, it isn't possible to use this app in clusters, because conf files are aligned by the Cluster Manager (indexer cluster) and by the Deployer or the Captain (search head cluster), and it isn't possible to modify the conf files of a single cluster member. Ciao. Giuseppe
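For context, in clustered deployments conf changes flow from the management tier rather than from individual members. Roughly, and as a sketch only (app names and the search head URI are placeholders):

# Indexer cluster: stage the change on the Cluster Manager, then push it
#   $SPLUNK_HOME/etc/manager-apps/<app>/local/   (master-apps on older versions)
splunk apply cluster-bundle

# Search head cluster: stage the change on the Deployer, then push it
#   $SPLUNK_HOME/etc/shcluster/apps/<app>/local/
splunk apply shcluster-bundle -target https://<sh>:8089

Any file edited directly on one member would be overwritten by the next bundle push, which is why the app above cannot work there.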
Hi @Nawab, you should use the sourcetypes defined in the add-on. The add-on should be installed on the Forwarder used to ingest the data and on the Search Heads, where it is used for search-time parsing. Ciao. Giuseppe P.S.: Karma Points are appreciated by all the contributors
Hi @Skv, as I said, Splunk Forwarders (both Universal and Heavy) have a cache mechanism, so if there's no connection to the Indexers, logs are stored locally on the Forwarder until the connection is re-established. Information about how these persistent queues work and how to configure them is available at https://docs.splunk.com/Documentation/Splunk/latest/Data/Usepersistentqueues . Ciao. Giuseppe
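As a minimal sketch of such a configuration (the values are illustrative, and note that persistent queues apply to network, scripted, and modular inputs, not to file monitor inputs):

# inputs.conf on the forwarder
[udp://514]
queueSize = 1MB
persistentQueueSize = 100MB

When the in-memory queue fills because the indexers are unreachable, events spill to disk up to persistentQueueSize and are replayed once the connection returns.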
Hi everyone, I got an error when installing a new agent (SplunkForwarder) on a new server. My inputs.conf looks like this:

[WinEventLog://Security]
disabled = 0
index = windows
sourcetype = WinEventLog:Security

[WinEventLog://System]
disabled = 0
index = windows
sourcetype = WinEventLog:System

[WinEventLog://Microsoft-Windows-PowerShell/Operational]
disabled = 0
index = windows
sourcetype = WinEventLog:PowerShell

But in the preview the source appears as C:\Windows\System32\winevt\Logs\Microsoft-Windows-WFP%4Operational.evtx. This is not my first time ingesting Windows data, but this error has only just started happening, and I'm confused about how to solve it.
Hi @ws, you have many ways to handle repetitive logs; the easiest is to save the logs in files with different names (e.g. adding date and time) and use the crcSalt = <SOURCE> option in the related inputs.conf stanza. Ciao. Giuseppe
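A minimal sketch of such a stanza (the path, index, and sourcetype are hypothetical; crcSalt = <SOURCE> is literal and makes Splunk include the file name in the CRC, so renamed files are re-read):

[monitor:///var/log/myapp/export_*.log]
index = main
sourcetype = myapp
crcSalt = <SOURCE>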
Hi @Tajuddin, first of all: to share something like log samples or code you can use the "Insert/Edit code sample" button. Anyway, this seems to be a JSON log; did you try using INDEXED_EXTRACTIONS=JSON or the spath command? Otherwise, it's possible to use a regex. Ciao. Giuseppe
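As a sketch of the two options (the sourcetype name is a placeholder):

# props.conf, index-time extraction
[my_json_sourcetype]
INDEXED_EXTRACTIONS = json

Or at search time:

index=your_index sourcetype=my_json_sourcetype | spath

Just avoid enabling both at once, or each field may be extracted twice.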
Please note that there is actually  in place of . When I posted, it was automatically converted to an emoji.
I have the following log in Splunk from which I want to extract room names and their respective IDs. Please help with the Splunk search to print the room names along with deduplicated IDs.

Log event:

TIME:10/Feb/2025:03:08:17 -0800 TYPE:INFO APP_NAME:ROOM_LOOKUP_JOBS APP_BUILD_VERSION:NOT_DEFINED CLIENT_IP:100.102.16.183 CLIENT_USER_AGENT:Unknown Browser CLIENT_OS_N_DEVICE:Unknown OS Unknow Devices CLIENT_REQUEST_METHOD:GET CLIENT_REQUEST_URI:/supporting-apps/room-lookup-job/index.php CLIENT_REQUEST_TYPE:HttpRequest CLIENT_REQUEST_CONTENT_SIZE:0 SERVER_HOST_NAME:roomlookupjob-prod.us-west-2i.app.apple.com SERVER_CONTAINER_ID:roomlookupjob-prod-5d96c45c64-w4q79 REQUEST_UNIQUE_ID:Z6neG-5vAofNnSWuA5msAQAAAAA MESSAGE="Rooms successfully updated for building - IL01: [{\"name\":\"Chiaroscuro (B277) [AVCN] (3) {R} IL01 2nd\",\"id\":\"6C30AF02-5900-480C-873F-8B0763DE95F8\"},{\"name\":\"2-Pop (N221) [AVCN] (8) {R} IL01 2nd\",\"id\":\"7853CB27-A083-454F-90A6-006854396AD1\"},{\"name\":\"Bonk (B380) [AVCN] (3) {R} IL01 3rd\",\"id\":\"88AF6D48-F930-4A98-9171-BE1FAAF0E36D\"},{\"name\":\"Montage (D203) [AVCN] (7) {R} IL01 2nd\",\"id\":\"29C44E4D-8628-4815-9AB8-CF49682A9EDC\"},{\"name\":\"Cougar - Interview Room Only (B138) (4) {R} IL01 1st\",\"id\":\"D1F40F0F-E40D-46B3-BD62-2C9A054E9E70\"},{\"name\":\"Iceman - Interview Room Only (B140) (3) {R} IL01 1st\",\"id\":\"38348FD5-021A-466E-A860-0A45CA9CD18F\"},{\"name\":\"Merlin - Interview Room Only (B136) (2) {R} IL01 1st\",\"id\":\"51211C55-94EA-4B38-97B6-2EB20369FDAF\"},{\"name\":\"Viper - Interview Room Only (B134) (10) {R} IL01 1st\",\"id\":\"940E9844-49BF-4B4E-B114-A2D734203C37\"},{\"name\":\"Maverick - Interview Room Only (B142) (4) {R} IL01 1st\",\"id\":\"6D29660F-09C3-4634-8DE5-0ECFAA5639DB\"},{\"name\":\"Vignette (R278) [AVCN] (12) {R} IL01 2nd\",\"id\":\"00265678-8775-4E95-A7CA-8454AD35C4A4\"},{\"name\":\"Broom Wagon (A317) [AVCN] (14) {R} IL01 3rd\",\"id\":\"1D1EB626-C5D2-4289-B5DA-A7F6EAA79AE8\"},{\"name\":\"Jump Cut (D211) [AVCN] (22) {R} IL01 2nd\",\"id\":\"66FF42BA-3ED6-48E6-886D-08CE18124110\"},{\"name\":\"{M} The Roundhouse (P404) (6) {R} IL01 4th\",\"id\":\"2477B40A-97BF-E2C7-4908-EF5D172D5DD3\"},{\"name\":\"Corncob (S323) [AVCN] (7) {R} IL01 3rd\",\"id\":\"F01706E7-F19B-3035-CEF4-4D13FC792B0E\"},{\"name\":\"Rouleur (Q311) [AVCN] (14) {R} IL01 3rd\",\"id\":\"D96D16CE-557E-90A0-AF65-9FCAAE406659\"},{\"name\":\"Field Sprint (S341) [AVCN] (13) {R} IL01 3rd\",\"id\":\"DA59EAC2-8491-3EE2-9B78-A54E5A3FE704\"},{\"name\":\"{M} Storyboard (C218) [AVCN] (27) {R} IL01 2nd\",\"id\":\"45C4588D-0CB5-D035-5C2E-517477B1D7CB\"},{\"name\":\"Zoetrope (S241) [AVCN] (8) {R} IL01 2nd\",\"id\":\"58750290-4C79-9AFB-B277-BDE5A219D0E5\"},{\"name\":\"Sizzle Reel (P248) [AVCN] (8) {R} IL01 2nd\",\"id\":\"DF8004E6-25B8-3B18-794D-253D83FE1279\"},{\"name\":\"Rough Cut (N213) [AVCN] (7) {R} IL01 2nd\",\"id\":\"A3792CEC-BF73-F207-DB06-3884D1042C80\"}]"

Current search:

index=roomlookup_prod | search "Rooms successfully updated for building - IL01"

Expected results:

name                                          id
Chiaroscuro (B277) [AVCN] (3) {R} IL01 2nd    6C30AF02-5900-480C-873F-8B0763DE95F8
2-Pop (N221) [AVCN] (8) {R} IL01 2nd          7853CB27-A083-454F-90A6-006854396AD1
and so on..
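One way to do this, offered as a sketch (the capture name rooms_json is illustrative; the embedded JSON carries escaped quotes, so they are unescaped before spath):

index=roomlookup_prod "Rooms successfully updated for building - IL01"
| rex "MESSAGE=\"[^:]+: (?<rooms_json>\[.*\])\""
| eval rooms_json=replace(rooms_json, "\\\\\"", "\"")
| spath input=rooms_json path={} output=room
| mvexpand room
| spath input=room
| dedup id
| table name id

The rex pulls out the JSON array after the building name, spath with path={} splits the array into one multivalue entry per room, mvexpand turns those into separate results, and the second spath extracts name and id from each object.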
@splunklearner  Please check this solution.  Solved: Re: Why would INDEXED_EXTRACTIONS=JSON in props.co... - Splunk Community
@splunklearner Verify in splunkd.log whether your Universal Forwarder (UF) or Heavy Forwarder (HF) is sending duplicate events. Check inputs.conf and make sure crcSalt = <SOURCE> is set to avoid duplicate ingestion.
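To confirm at search time whether events really are duplicated, a quick check along these lines may help (index name and time range are placeholders):

index=your_index earliest=-1h
| stats count by _raw
| where count > 1

If identical raw events show up with count > 1, the duplication is happening at ingestion rather than at field extraction.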
Team, when we search for HTTP code 500 (internal server error) in Splunk it works fine, but when we use the same query in a Python script we don't get any results. Could you please help me with this? Thanks
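A frequent cause is that, when querying via the REST API, the search string must begin with the search keyword and the time range must be passed explicitly; the UI adds both for you. A minimal sketch using the splunk-sdk Python package (connection details, index, and query are hypothetical):

import splunklib.client as client
import splunklib.results as results

# Hypothetical connection details -- adjust for your environment
service = client.connect(host="splunk.example.com", port=8089,
                         username="admin", password="changeme")

# Note the leading "search" keyword and the explicit time range;
# omitting either is a common reason a query that works in the UI
# returns nothing through the API.
query = 'search index=web status=500 "internal server error"'
stream = service.jobs.oneshot(query, earliest_time="-24h",
                              latest_time="now", output_mode="json")

for item in results.JSONResultsReader(stream):
    if isinstance(item, dict):  # skip diagnostic messages
        print(item)

If that still returns nothing, compare the app/user context of the API account with the one used in the UI, since knowledge objects like field extractions may not be shared.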
Hi all, I have added the stanza below to props.conf and pushed it to the indexers. Fields are being extracted from the JSON, but the logs are getting duplicated. Please help me.

[sony_waf]
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 25
TIME_FORMAT = %b %d %H:%M:%S
LINE_BREAKER = ([\r\n]+)
pulldown_type = true
SEDCMD-removeheader = s/^[^\{]*//g
SHOULD_LINEMERGE = false
TRUNCATE = 20000
KV_MODE = json
AUTO_KV_JSON = true
Thank you for the suggestion. We were able to restore the KV store even without changing the dynamic captain.
@dy1 Check the status of the KV store with the following command:

/opt/splunk/bin/splunk show kvstore-status -auth <user_name>:<password>

Review the mongod.log and splunkd.log files for more detailed error messages. If there's a lock file causing the issue, you can remove it:

sudo rm -rf /xxx/kvstore/mongo/mongod.lock

Renaming the current MongoDB folder can help reset the KV Store:

mv $SPLUNK_HOME/var/lib/splunk/kvstore/mongo $SPLUNK_HOME/var/lib/splunk/kvstore/mongo.old

Steps:
1. Stop Splunk
2. Rename the current mongo folder to mongo.old
3. Start Splunk

A new mongo folder will then be created with all the components.
I understand that Splunk performs a check on the first 256 characters of an event to decide whether events are the same. But in my current situation, would your recommendation be that we need to customize the application to implement a checkpoint mechanism for tracking previously indexed records?
Hi team, I have been working on assigning a custom urgency level to all notables triggered through our correlation searches in Enterprise Security (ES). Specifically, I aimed to set the severity to "high" by adding eval severity=high to each relevant search. However, despite implementing this change, some of the notables are still being categorized as "medium". Could you please assist with identifying what might be causing this discrepancy, and suggest any additional steps required to ensure all triggered notables reflect the intended high urgency level? Thank you for your assistance.
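One likely cause, offered as a guess from the snippet: without quotes, eval severity=high assigns the value of a (non-existent) field named high, so severity ends up null and ES falls back to the default urgency. The string literal needs quoting:

| eval severity="high"

Also note that in ES the notable's urgency is derived from both the event severity and the priority of the matching asset or identity (via the urgency lookup), so even a genuine high severity can be downgraded when the asset priority is low.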
Currently, we are not focusing on searches but rather on the application created to pull data from the API provided by the destination party. Based on my understanding of the current setup, the new data is retrieved by the application through the destination API. The data includes fields such as ID, case status, case close date, and others. At this point, duplicates would be identified based on the ID field. Please correct me if I'm wrong, but given the current setup, wouldn't this result in duplicate data, since we are calling at an interval of 1 hour but pulling a 4-hour window of logs each time? For example: at 10am, 6am-10am; at 11am, 11am-3pm.
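If the overlap cannot be avoided on the API side, a checkpoint in the pulling application is the usual way to avoid re-ingesting the overlapping window. A minimal sketch, assuming a hypothetical fetch_cases() helper that returns the last 4 hours of records as dicts, and a file-based checkpoint of already-seen IDs:

import json
import os

CHECKPOINT_FILE = "/opt/myapp/checkpoint.json"  # hypothetical path

def load_checkpoint():
    # Return the set of IDs already indexed (empty on first run)
    if os.path.exists(CHECKPOINT_FILE):
        with open(CHECKPOINT_FILE) as f:
            return set(json.load(f))
    return set()

def save_checkpoint(seen_ids):
    with open(CHECKPOINT_FILE, "w") as f:
        json.dump(sorted(seen_ids), f)

def run_once(fetch_cases):
    # fetch_cases() stands in for the real API call
    seen = load_checkpoint()
    for record in fetch_cases():
        if record["id"] in seen:
            continue  # already indexed on a previous run
        print(json.dumps(record))  # emit to stdout for Splunk to index
        seen.add(record["id"])
    save_checkpoint(seen)

The seen set grows over time, so in practice you would prune IDs whose case close date is older than the widest polling window.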