All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, I am displaying a table as the result of a search, but I would like to add an additional column with static values based on an existing column. For example:

S.No    Name    Dept
1       Andy    IT
2       Chris   Bus
3       Nike    Pay

In the above table, I would like to add another column called Company and map its value based on the Dept column as follows:

If Dept is IT, then the value for Company is XXXX
If Dept is Bus, then the value for Company is YYYY
If Dept is Pay, then the value for Company is ZZZZ

The final table should look like:

S.No    Name    Dept    Company
1       Andy    IT      XXXX
2       Chris   Bus     YYYY
3       Nike    Pay     ZZZZ

@ITWhisperer
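A minimal SPL sketch of the mapping described above (field names and company values taken from the post):

```
| eval Company=case(Dept=="IT","XXXX", Dept=="Bus","YYYY", Dept=="Pay","ZZZZ")
```

Note that case() leaves Company null for any Dept value not listed; appending a final pair such as true(),"N/A" would supply a default.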
Here's the context: I created a Splunk add-on app in a Splunk Enterprise trial instance and then created an input for it using modular Python code, with an API as the source. I ran validation and packaging, downloaded the resulting .spl file, and uploaded it to a Splunk Cloud environment. The upload went through with warnings but no errors, and it let me install the app. After installing it and restarting the Cloud environment, I created an input using the installed app, created a new index for it, and ran a search against that index. After waiting for it to generate more events (the input's interval is 10 minutes), the warning below appears and keeps incrementing every 10 minutes. My understanding is that the events are being redirected to the lastchanceindex. However, when I create the same input and index in the Splunk Enterprise instance where I built the app, events are generated correctly and are not redirected to the lastchanceindex. What could be the issue in this scenario, and how can I solve it? I've checked other questions here in the community and I don't think any of them relate to this scenario. I hope someone can help. Thanks!

"Search peer idx-i-0c2xxxxxxxxxx1d15.xxxxxx-xxxxxxxx.splunkcloud.com has the following message: Redirected event for unconfigured/disabled/deleted index=xxx with source="xxx" host="host::xxx" sourcetype="sourcetype::xxx" into the LastChanceIndex. So far received events from 15 missing index(es)."
Hi everyone, I got an error when installing a new agent on a new server using SplunkForwarder. My inputs.conf looks like this:

[WinEventLog://Security]
disabled = 0
index = windows
sourcetype = Wineventlog:Security

[WinEventLog://System]
disabled = 0
index = windows
sourcetype = Wineventlog:System

[WinEventLog://Microsoft-Windows-PowerShell/Operational]
disabled = 0
index = windows
sourcetype = WinEventLog:PowerShell

But in the preview, the source shows up as C:\Windows\System32\winevt\Logs\Microsoft-Windows-WFP%4Operational.evtx. This is not my first time ingesting Windows logs, but this error has just started happening to me, and I am not sure how to solve it.
I have the following log from splunk where i want to extract names and their respective ids. Please help with the splunk search to print the room names along with dedup ids. Log Event TIME:10/Feb/2025:03:08:17 -0800 TYPE:INFO APP_NAME:ROOM_LOOKUP_JOBS APP_BUILD_VERSION:NOT_DEFINED CLIENT_IP:100.102.16.183 CLIENT_USER_AGENT:Unknown Browser CLIENT_OS_N_DEVICE:Unknown OS Unknow Devices CLIENT_REQUEST_METHOD:GET CLIENT_REQUEST_URI:/supporting-apps/room-lookup-job/index.php CLIENT_REQUEST_TYPE:HttpRequest CLIENT_REQUEST_CONTENT_SIZE:0 SERVER_HOST_NAME:roomlookupjob-prod.us-west-2i.app.apple.com SERVER_CONTAINER_ID:roomlookupjob-prod-5d96c45c64-w4q79 REQUEST_UNIQUE_ID:Z6neG-5vAofNnSWuA5msAQAAAAA MESSAGE="Rooms successfully updated for building - IL01: [{\"name\":\"Chiaroscuro (B277) [AVCN] (3) {R} IL01 2nd\",\"id\":\"6C30AF02-5900-480C-873F-8B0763DE95F8\"},{\"name\":\"2-Pop (N221) [AVCN] (8) {R} IL01 2nd\",\"id\":\"7853CB27-A083-454F-90A6-006854396AD1\"},{\"name\":\"Bonk (B380) [AVCN] (3) {R} IL01 3rd\",\"id\":\"88AF6D48-F930-4A98-9171-BE1FAAF0E36D\"},{\"name\":\"Montage (D203) [AVCN] (7) {R} IL01 2nd\",\"id\":\"29C44E4D-8628-4815-9AB8-CF49682A9EDC\"},{\"name\":\"Cougar - Interview Room Only (B138) (4) {R} IL01 1st\",\"id\":\"D1F40F0F-E40D-46B3-BD62-2C9A054E9E70\"},{\"name\":\"Iceman - Interview Room Only (B140) (3) {R} IL01 1st\",\"id\":\"38348FD5-021A-466E-A860-0A45CA9CD18F\"},{\"name\":\"Merlin - Interview Room Only (B136) (2) {R} IL01 1st\",\"id\":\"51211C55-94EA-4B38-97B6-2EB20369FDAF\"},{\"name\":\"Viper - Interview Room Only (B134) (10) {R} IL01 1st\",\"id\":\"940E9844-49BF-4B4E-B114-A2D734203C37\"},{\"name\":\"Maverick - Interview Room Only (B142) (4) {R} IL01 1st\",\"id\":\"6D29660F-09C3-4634-8DE5-0ECFAA5639DB\"},{\"name\":\"Vignette (R278) [AVCN] (12) {R} IL01 2nd\",\"id\":\"00265678-8775-4E95-A7CA-8454AD35C4A4\"},{\"name\":\"Broom Wagon (A317) [AVCN] (14) {R} IL01 3rd\",\"id\":\"1D1EB626-C5D2-4289-B5DA-A7F6EAA79AE8\"},{\"name\":\"Jump Cut (D211) [AVCN] (22) 
{R} IL01 2nd\",\"id\":\"66FF42BA-3ED6-48E6-886D-08CE18124110\"},{\"name\":\"{M} The Roundhouse (P404) (6) {R} IL01 4th\",\"id\":\"2477B40A-97BF-E2C7-4908-EF5D172D5DD3\"},{\"name\":\"Corncob (S323) [AVCN] (7) {R} IL01 3rd\",\"id\":\"F01706E7-F19B-3035-CEF4-4D13FC792B0E\"},{\"name\":\"Rouleur (Q311) [AVCN] (14) {R} IL01 3rd\",\"id\":\"D96D16CE-557E-90A0-AF65-9FCAAE406659\"},{\"name\":\"Field Sprint (S341) [AVCN] (13) {R} IL01 3rd\",\"id\":\"DA59EAC2-8491-3EE2-9B78-A54E5A3FE704\"},{\"name\":\"{M} Storyboard (C218) [AVCN] (27) {R} IL01 2nd\",\"id\":\"45C4588D-0CB5-D035-5C2E-517477B1D7CB\"},{\"name\":\"Zoetrope (S241) [AVCN] (8) {R} IL01 2nd\",\"id\":\"58750290-4C79-9AFB-B277-BDE5A219D0E5\"},{\"name\":\"Sizzle Reel (P248) [AVCN] (8) {R} IL01 2nd\",\"id\":\"DF8004E6-25B8-3B18-794D-253D83FE1279\"},{\"name\":\"Rough Cut (N213) [AVCN] (7) {R} IL01 2nd\",\"id\":\"A3792CEC-BF73-F207-DB06-3884D1042C80\"}]" index=roomlookup_prod | search "Rooms successfully updated for building - IL01" Expected results: name id Chiaroscuro (B277) [AVCN] (3) {R} IL01 2nd 6C30AF02-5900-480C-873F-8B0763DE95F8 2-Pop (N221) [AVCN] (8) {R} IL01 2nd  7853CB27-A083-454F-90A6-006854396AD1 and so on..
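A hedged SPL sketch of one way to pull the name/id pairs out of the event above: capture the escaped JSON array from the MESSAGE field, unescape it, and let spath expand the objects (the rex pattern and the backslash-unescaping in replace() are written against the raw format shown above and may need adjustment for your exact events):

```
index=roomlookup_prod "Rooms successfully updated for building - IL01"
| rex "MESSAGE=\"[^:]+: (?<rooms>\[.+\])\""
| eval rooms=replace(rooms, "\\\\\"", "\"")
| spath input=rooms path={} output=room
| mvexpand room
| spath input=room
| dedup id
| table name id
```

The spath path={} step turns the JSON array into one multivalue field of objects, mvexpand gives one event per room, and dedup id keeps a single row per room id.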
Team, when we search for HTTP code 500 (internal server error) in Splunk, it works fine, but when we use the same query in a Python script, we don't get any results. Could you please help me with this? Thanks.
Hi team, I have been working on assigning a custom urgency level to all notables triggered by our correlation searches in Enterprise Security (ES). Specifically, I aimed to set the severity to "high" by adding eval severity=high to each relevant search. However, despite implementing this change, some of the notables are still being categorized as "medium".

Could you please assist with identifying what might be causing this discrepancy, and suggest any additional steps required to ensure all triggered notables reflect the intended high urgency level?

Thank you for your assistance.
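One detail worth checking, sketched below: in SPL, an unquoted value on the right-hand side of eval is treated as a field name, so eval severity=high assigns the value of a (usually nonexistent, hence null) field called high rather than the literal string "high". A quoted version:

```
| eval severity="high"
```

As far as I know, ES also derives notable urgency from a combination of severity and asset/identity priority via its urgency lookup, so severity alone may not fully determine the final urgency.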
Hi, I want to omit the line below from the Splunk alert payload if the user has not provided an email address:

"action.email.to": {email},

What is the best way to do this? An if/else statement inside the payload throws a syntax error.
KV Store changed status to failed. KVStore process terminated. 10/2/2025, 12:23:23 am
Failed to start KV Store process. See mongod.log and splunkd.log for details. 10/2/2025, 12:23:23 am
KV Store process terminated abnormally (exit code 4, status PID 6147 killed by signal 4: Illegal instruction). See mongod.log and splunkd.log for details.

The issues mentioned above are showing. #kvstore @kvstore @splunk
How do I install the Config Explorer application in a clustered environment? We have a deployment server, cluster manager, and deployer, and we push apps through the deployment server to the CM and deployer. Please tell me how to do this after downloading it from Splunkbase.
We have an environment where Splunk UFs send logs to a HF. The UFs frequently get stuck even though the HF and indexers are up, and we need to restart the UFs to get them sending logs again. Why do the UFs get stuck even when the indexer and HF are available? CPU and RAM utilization on the servers is normal.
Hello all, we currently have events like the following, which contain both JSON and non-JSON data. Please help me remove the non-JSON part, and advise where I need to set INDEXED_EXTRACTIONS or KV_MODE so that all JSON fields are auto-extracted effectively.

Nov 9 17:34:28 128.160.82.28 [local0.warning] <132>1 2024-11-09T17:34:28.436542Z AviVantage v-epswafhic2-wdc.hc.cloud.uk.hc-443 NILVALUE NILVALUE - {"adf":true,"significant":0,"udf":false,"virtualservice":"virtualservice-4583863f-48a3-42b9-8115-252a7fb487f5","report_timestamp":"2024-11-09T17:34:28.436542Z","service_engine":"GB-DRN-AB-Tier2-se-vxeuz","vcpu_id":0,"log_id":10181,"client_ip":"128.12.73.92","client_src_port":44908,"client_dest_port":443,"client_rtt":1,"http_version":"1.1","method":"HEAD","uri_path":"/path/to/monitor/page/","host":"udg1704n01.hc.cloud.uk.hc","response_content_type":"text/html","request_length":93,"response_length":94,"response_code":400,"response_time_first_byte":1,"response_time_last_byte":1,"compression_percentage":0,"compression":"","client_insights":"","request_headers":3,"response_headers":12,"request_state":"AVI_HTTP_REQUEST_STATE_READ_CLIENT_REQ_HDR","significant_log":["ADF_HTTP_BAD_REQUEST_PLAIN_HTTP_REQUEST_SENT_ON_HTTPS_PORT","ADF_RESPONSE_CODE_4XX"],"vs_ip":"128.160.71.14","request_id":"61e-RDl6-OZgZ","max_ingress_latency_fe":0,"avg_ingress_latency_fe":0,"conn_est_time_fe":1,"source_ip":"128.12.73.92","vs_name":"v-epswafhic2-wdc.hc.cloud.uk.hc-443","tenant_name":"admin"}

And where do I need to apply these configurations? We have syslog servers with a UF installed that send data to our deployment server. The DS pushes apps to the master and the deployer, and from there they are distributed. As of now we have props.conf on the master, which pushes it to the indexers.
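A minimal props.conf sketch of one common approach (the sourcetype name here is a placeholder): strip everything before the first opening brace at parse time with SEDCMD, then auto-extract the remaining JSON at search time with KV_MODE. Note that SEDCMD takes effect on the first full Splunk instance that parses the data (indexers or a heavy forwarder, not a UF), while KV_MODE applies on the search head.

```
[avi:vantage]
# Drop the syslog prefix so only the JSON payload remains in _raw
SEDCMD-strip_syslog_prefix = s/^[^{]+//
# Auto-extract the JSON fields at search time
KV_MODE = json
```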
Hi. I am new to Splunk and SentinelOne. Here is what I've done so far. I need to forward logs from SentinelOne to a single Splunk instance. Since it is a single instance, I installed the Splunk CIM Add-on and the SentinelOne App, as mentioned in the app's installation instructions ( https://splunkbase.splunk.com/app/5433 ). In the SentinelOne App on the Splunk instance, I changed the search index to sentinelone under Application Configuration; I had already created that index for testing purposes. In the API configuration, I added the URL, which is xxx-xxx-xxx.sentinelone.net, and the API token. The token was generated by adding a new service user in SentinelOne and clicking "generate API token", with global scope; I am not sure if it is the correct kind of API token. Moreover, I am not sure which channels I need to pick under SentinelOne inputs in Application Configuration (SentinelOne App), such as Agents/Activities/Applications etc. How do I know which channels I need to forward, or should I just add all of them? Clicking the application health overview, there is no data ingested. Searching index=_internal sourcetype="sentinelone*" sourcetype="sentinelone:modularinput" does not show any action=saving_checkpoint events, which suggests no data is coming in. Any help or documentation for the setup would be appreciated. I would like to know the reason there is no data and how to fix it. Thank you.
When using this package in a Jupyter Notebook, I'm using Python to apply different models to the data based on whether it's during working hours or not. Although I'm using an autoencoder as the main architectural framework, I'm taking this approach because the data follows different distributions under these two scenarios. Are there any better approaches?
Team, I'm trying to push Jenkins build logs to Splunk. I installed the Splunk Plugin (1.10.1) in my CloudBees Jenkins and configured the HTTP host, port, and token; the connection test looks good. In Splunk, I created a HEC input in the file below with the following content.

File name: /opt/app/splunk/etc/apps/splunk_httpinput/local/inputs.conf

[http://jenkins_build_logs]
description = Jenkins build Logs
disabled = 0
index = infra
indexes = infra
sourcetype = jenkins:build
token =
useACK = 0

I am getting the errors below in the Splunk logs (/opt/app/splunk/var/log/splunk):

02-08-2025 04:52:07.704 +0000 ERROR HttpInputDataHandler [17467 HttpDedicatedIoThread-1] - Failed processing http input, token name=jenkins_build_logs, channel=n/a, source_IP=10.212.102.217, reply=7, status_message="Incorrect index", status=400, events_processed=1, http_input_body_size=381, parsing_err="invalid_index='jenkins_console'"
02-08-2025 04:54:14.617 +0000 ERROR HttpInputDataHandler [17467 HttpDedicatedIoThread-1] - Failed processing http input, token name=jenkins_build_logs, channel=n/a, source_IP=10.212.100.150, reply=7, status_message="Incorrect index", status=400, events_processed=1, http_input_body_size=317, parsing_err="invalid_index='jenkins_statistics'"
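A sketch of one possible cause, based on the errors above: the incoming events are tagged for the indexes jenkins_console and jenkins_statistics, which are not in this token's allowed list (indexes = infra). Assuming those are the indexes the Jenkins plugin targets, the stanza would need to allow them, and those indexes would need to exist on the instance:

```
[http://jenkins_build_logs]
description = Jenkins build Logs
disabled = 0
# Default index when an event does not specify one
index = infra
# Indexes this HEC token is allowed to write to
indexes = infra, jenkins_console, jenkins_statistics
sourcetype = jenkins:build
token =
useACK = 0
```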
Hi there, I am new to this community, but I want to understand how to purchase Splunk ITSI. I already have both a Splunk Enterprise license (purchased from the AWS Marketplace) and the free license. A long time back, I used Splunk ITSI for free with an Enterprise license, but now it requires some authorization and says my user is not listed in the authorized list when downloading ITSI. Please help me with this.
Hi guys! I am getting the following error message when trying to publish a model which I created in "Experiments". I do not know what this is supposed to mean. Does anyone have an idea? I would appreciate your help! Best regards!
Hello, I am trying to find a way to report on all applied Group Policy Objects for all of our domain-joined computers. This would be similar to running the following command:

gpresult /r /scope computer

Is there a way that Splunk can gather all of this information as a report? I did see there was an app called Splunk App for Windows Infrastructure, but it was EOLd. Is there anything new that would audit our computers? Thanks, Charlie
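One possible sketch, assuming a universal forwarder on each Windows host: Splunk's PowerShell scripted input can run gpresult on a schedule and index the text output (the stanza name, schedule, index, and sourcetype below are all placeholders):

```
[powershell://AppliedGPOs]
# Capture applied computer-scope GPOs as text
script = gpresult /r /scope computer | Out-String
# Run daily at 06:00 (cron syntax)
schedule = 0 6 * * *
index = windows
sourcetype = windows:gpresult
disabled = 0
```

A report could then be built by searching that sourcetype across all hosts.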
Anytime I try to do anything with my deployment server, I get this error:

An error occurred: Could not create Splunk settings directory at '/root/.splunk'

This includes the command ./splunk reload deploy-server. We have AWS EC2 instances hosted for all components; we open them via SSM and log in via sudo -i. I tried sudo chown -R splunk:splunk /opt/splunk/bin, but still the same issue. And one more doubt: if we edit something in etc/deployment-apps, a reload should be enough to distribute the updated configurations, right? When I restart, the configurations are reflected, but I am not sure why reload is throwing this error.
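A sketch of one common workaround (assuming Splunk is installed in /opt/splunk and runs as the splunk user): the Splunk CLI stores per-user settings such as session tokens under $HOME/.splunk, so when it is invoked as root it tries to use /root/.splunk. Running the CLI explicitly as the splunk user keeps the settings directory under that user's home:

```
sudo -u splunk /opt/splunk/bin/splunk reload deploy-server
```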
Hello,

I have a lookup with URLs like:

url
www.url.com
.url.com
site.url.com

I try to match it against my proxy logs to check whether users access these sites, but I have issues with ".url.com", since it does not exactly match the hostname. I tried replacing those entries with "*.url.com", but the Splunk lookup does not match wildcards. I tried things like this, but nothing worked:

| inputlookup all_url.csv
| rename url as lookup_url
| join type=inner
    [ search index=my-proxy
    | eval lookup_url="*" . lookup_url . "*"
    | search hostname=lookup_url ]

Do you have any ideas? Thanks
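A minimal sketch of one standard approach (lookup and field names assumed from the post): define the lookup in transforms.conf with a WILDCARD match_type, keep the "*.url.com" style entries in the CSV, and then run the lookup against the proxy events:

```
# transforms.conf
[all_url]
filename = all_url.csv
# Allow wildcard patterns in the url column to match
match_type = WILDCARD(url)
```

With that in place, something like index=my-proxy | lookup all_url url AS hostname OUTPUT url AS matched_url | where isnotnull(matched_url) would surface events whose hostname matches any entry, including the wildcarded ones.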
I'm working to automate the creation of muting rules for our maintenance windows. I've been looking around to see if there is a way to use the API to create a muting rule, but I'm not finding anything; does this not exist? Is there an existing integration with ServiceNow that would do this that I'm just not finding? I'm hoping to tie into our change management system so that these muting windows are created automatically upon approval.
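If this is Splunk Observability Cloud (SignalFx), its REST API does expose muting rules via the /v2/alertmuting endpoint; a hedged sketch, where the realm, token, times (milliseconds since epoch), and filter property/value are all placeholders to adapt:

```
curl -X POST "https://api.us1.signalfx.com/v2/alertmuting" \
  -H "Content-Type: application/json" \
  -H "X-SF-TOKEN: YOUR_ORG_TOKEN" \
  -d '{
        "description": "Change window CHG0012345",
        "startTime": 1739433600000,
        "stopTime": 1739440800000,
        "filters": [
          { "property": "host", "propertyValue": "web-01" }
        ]
      }'
```

A ServiceNow business rule or flow firing on change approval could call this endpoint with the change window's start and stop times.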