All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


In the process of upgrading our Splunk Enterprise. Currently on version 7.1.3 (I know, super old, bear with me). I installed the Splunk Platform Readiness App v2.2.1 and set the permissions to write as the documentation states. When I go to launch the app, I get this error:

Error reading progress for user: <me> on host <hostname>

Digging a bit more into it, I realized that the Splunk Platform Readiness App uses the KV store, and I'm running into these errors:

KV Store process terminated abnormally (exit code 14, status exited with code 14). See mongod.log and splunkd.log for details.
KV Store changed status to failed. KV Store process terminated.
Failed to start KV Store process. See mongod.log and splunkd.log for details.

Splunk is running on Windows Server.

I tried renaming the server.pem file in Splunk/etc/auth and restarting; it made a new server.pem file, but the same issues persist. I attempted to look into mongod.log and splunkd.log, but I'm not sure what I should be looking for. I hadn't yet tried renaming the mongo folder in var/lib/splunk/kvstore to mongo(old), which I saw worked for some other people with the same issue.

Did some more troubleshooting: I renamed the mongo folder to mongo(old) and a new one was recreated, with the same issues as before. Looking in the mongod.log file, I found this:

Detected unclean shutdown - C:\Program Files\Splunk\var\lib\kvstore\mongod.lock is not empty.
InFile::open(), CreateFileW for C:\Program Files\Splunk\var\lib\splunk\kvstore\mongo\journal\lsn failed with Access is denied.
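For future readers: the "Access is denied" line usually points at NTFS permissions on the KV store path rather than at mongod itself. A minimal sketch of a first check, assuming splunkd runs as the Local System account (substitute the actual service account shown in services.msc):

REM Inspect the current ACLs on the KV store directory.
icacls "C:\Program Files\Splunk\var\lib\splunk\kvstore"

REM Grant the Splunk service account full control, recursively, from an
REM elevated prompt. SYSTEM is an assumption here; use your service account.
icacls "C:\Program Files\Splunk\var\lib\splunk\kvstore" /grant SYSTEM:(OI)(CI)F /T

After fixing the ACLs, restart Splunk so mongod can reopen its journal files.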
Hey there, let me start off by saying I can delete labels if there are no assets using them. The issue arises when an asset is "using" a label but I cannot tell how. For some reason we have both "event" and "events", and I would like to delete the unused "event" label, but there's an asset using it. Looking under all configured assets, I cannot find where the label "event" is used. How can I find the asset in question when all that's listed is a simple description: 1 Asset (asset name)? When looking at all my assets, only one matches. But inside this asset, for the app REST API, I can't find any mention of or designation for labels whatsoever.
The goal is to get Entra logs into Splunk Cloud and alert on non-domain-affiliated logins. I can't seem to find any documentation on this.
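A sketch of what such an alert search could look like, assuming the Splunk Add-on for Microsoft Azure is feeding Entra ID sign-in logs with sourcetype azure:monitor:aad; the index name and company domain are placeholders:

index=azure_ad sourcetype="azure:monitor:aad" category=SignInLogs
| eval upn_domain=lower(mvindex(split(userPrincipalName, "@"), 1))
| where isnotnull(upn_domain) AND upn_domain!="mycompany.com"
| table _time userPrincipalName upn_domain ipAddress

Saved as an alert that triggers when results are returned, this flags any sign-in whose userPrincipalName domain is not the corporate one.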
Hello, I have the inputs stanza below to monitor the syslog feed coming into index=base. Now we need to filter out events with specific host names and re-route them to a new index.

[monitor:///usr/local/apps/logs/*/base_log/*/*/*/*.log]
disabled = 0
sourcetype = base:syslog
index = base
host_segment = 9

For example, I have hosts (serverxyz.myserver.com, myhostabc.myserver.com, myhostuvw.myserver.com), and I want to match *xyz* and *abc* and re-route them to a new index. Since the old config has /*/, which feeds everything to the old index, I want to add a blacklist to the old stanza to avoid ingesting into both indexes.

OLD stanza:

[monitor:///usr/local/apps/logs/*/base_log/*/*/*/*.log]
disabled = 0
sourcetype = base:syslog
index = base
host_segment = 9
blacklist = (*xyz*|.*\/*abc*\/)

NEW stanzas:

[monitor:///usr/local/apps/logs/*/base_log/*/*/*xyz*/*.log]
disabled = 0
sourcetype = base:syslog
index = mynewindex
host_segment = 9

[monitor:///usr/local/apps/logs/*/base_log/*/*/*abc*/*.log]
disabled = 0
sourcetype = base:syslog
index = mynewindex
host_segment = 9
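For reference, a sketch of the other common approach: keep the single monitor stanza and route by host at parse time with props.conf and transforms.conf on the indexers or heavy forwarders. Stanza and index names below are illustrative. Since a monitor blacklist matches against the file path rather than the host value, this sidesteps the overlapping-wildcard problem entirely.

props.conf:

[base:syslog]
TRANSFORMS-route_by_host = route_xyz_abc

transforms.conf:

[route_xyz_abc]
SOURCE_KEY = MetaData:Host
REGEX = (xyz|abc)
DEST_KEY = _MetaData:Index
FORMAT = mynewindex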
Hello, we have a query for an alert that was working previously but is no longer returning the correct results. We haven't changed anything on our instance, so I'm not sure what the cause would be. The query is below (I blanked out the index names, etc., of course). I tested with a different query, which returns the expected results, but I'd like to figure out what's going on with this one.

index=testindex OR index=testindex2 source="insertpath" ErrorCodesResponse=PlanInvalid
| search TraceId=*
| stats values(TraceId) as TraceId
| mvexpand TraceId
| join type=inner TraceId
    [search index=test ("Test SKU")
    | fields TraceId, @t, @mt, RequestPath]
| eval date=strftime(strptime('@t', "%Y-%m-%dT%H:%M:%S.%6N%Z"), "%Y-%m-%d"),
    time=strftime(strptime('@t', "%Y-%m-%dT%H:%M:%S.%6N%Z"), "%H:%M")
| table time, date, TraceId, @MT, RequestPath
Hi everyone, I am trying to create a multi-KPI alert. I have tens of services with 4-5 KPIs each. Using the multi-KPI alert, I want to create a correlation search that sends me an email alert if any of the KPIs are at critical severity for more than 15 minutes. After selecting "Status over time" in the multi-KPI creation window, we have to set a trigger for each of the KPIs. Is there a way to set the same trigger for all the KPIs? For example: if any KPI is at Critical severity level >=50% of the last 30 minutes. It seems like I am missing something; there's no way I should have to click and set a trigger for each KPI hundreds of times. Thanks!
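If the per-KPI triggers prove unmanageable, one possible alternative is a plain correlation search over the ITSI summary index. This is only a sketch under assumptions: it presumes KPIs write one summary point per minute and that the alert_severity, serviceid, and kpi fields exist in your itsi_summary data as shown.

index=itsi_summary alert_severity=critical earliest=-15m@m latest=now
| stats count AS critical_points BY serviceid, kpi
| where critical_points >= 15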
How can I create a search job through a REST API?

The tool I have to use is Azure Data Factory, calling a REST API.

I am performing a POST search with url=" https://edp.splunkcloud.com:8089/services/search/v2/jobs?output_mode=json " and body={\n \"search\": \"search%20index%3D\"oper_event_dynatrace_perf\" source=\"dynatrace_timeseries_metrics_v2://dynatrace_synthetic_browser_totalduration\"%20earliest%3D-96h}"

In the response to the POST, the API returns a scheduler SID that refers to a search which is not the one I put in the POST's search. I checked Activity > Jobs in Splunk, and no job was created for my search or for my user.

How can I build the POST search so that it creates a job for my search through the Splunk API?

Input:

{
  "method": "POST",
  "headers": {
    "Content-Type": "application/json; charset=UTF-8"
  },
  "url": " https://edp.splunkcloud.com:8089/services/search/v2/jobs?output_mode=json ",
  "connectVia": {
    "referenceName": "integrationRuntime1",
    "type": "IntegrationRuntimeReference"
  },
  "body": "{\n \"search\": \"search%20index%3D\"oper_event_dynatrace_perf\" source=\"dynatrace_timeseries_metrics_v2://dynatrace_synthetic_browser_totalduration\"%20earliest%3D-96h}",
  "authentication": {
    "type": "Basic",
    "username": "saazrITAnalytD01",
    "password": {
      "type": "SecureString",
      "value": "***********"
    }
  }
}

Output:

{
  "links": {},
  "origin": " https://edp.splunkcloud.com:8089/services/search/v2/jobs ",
  "updated": "2024-11-21T16:04:41Z",
  "generator": {
    "build": "be317eb3f944",
    "version": "9.2.2406.109"
  },
  "entry": [
    {
      "name": "search ```Verifique se algum dos modelos ...",
      "id": " https://edp.splunkcloud.com:8089/services/search/v2/jobs/scheduler_dGlhZ28uZ29uY2FsdmVzQGJvcmRlci1pbm5vdmF0aW9uLmNvbQ_YWlvcHNfc3RvcmFnZV9wcm9qZWN0aW9u__RMD546f44b20564d9b63_at_1732179600_6116 ",
      "updated": "2024-11-21T09:00:30.684Z",
      "links": {
        "alternate": "/services/search/v2/jobs/scheduler_dGlhZ28uZ29uY2FsdmVzQGJvcmRlci1pbm5vdmF0aW9uLmNvbQ_YWlvcHNfc3RvcmFnZV9wcm9qZWN0aW9u__RMD546f44b20564d9b63_at_1732179600_6116",
        "search_telemetry.json": "/services/search/v2/jobs/scheduler_dGlhZ28uZ29uY2FsdmVzQGJvcmRlci1pbm5vdmF0aW9uLmNvbQ_YWlvcHNfc3RvcmFnZV9wcm9qZWN0aW9u__RMD546f44b20564d9b63_at_1732179600_6116/search_telemetry.json",
        "search.log": "/services/search/v2/jobs/scheduler_dGlhZ28uZ29uY2FsdmVzQGJvcmRlci1pbm5vdmF0aW9uLmNvbQ_YWlvcHNfc3RvcmFnZV9wcm9qZWN0aW9u__RMD546f44b20564d9b63_at_1732179600_6116/search.log",
        "events": "/services/search/v2/jobs/scheduler_dGlhZ28uZ29uY2FsdmVzQGJvcmRlci1pbm5vdmF0aW9uLmNvbQ_YWlvcHNfc3RvcmFnZV9wcm9qZWN0aW9u__RMD546f44b20564d9b63_at_1732179600_6116/events",
        "results": "/services/search/v2/jobs/scheduler_dGlhZ28uZ29uY2FsdmVzQGJvcmRlci1pbm5vdmF0aW9uLmNvbQ_YWlvcHNfc3RvcmFnZV9wcm9qZWN0aW9u__RMD546f44b20564d9b63_at_1732179600_6116/results",
        "results_preview": "/services/search/v2/jobs/scheduler_dGlhZ28uZ29uY2FsdmVzQGJvcmRlci1pbm5vdmF0aW9uLmNvbQ_YWlvcHNfc3RvcmFnZV9wcm9qZWN0aW9u__RMD546f44b20564d9b63_at_1732179600_6116/results_preview",
        "timeline": "/services/search/v2/jobs/scheduler_dGlhZ28uZ29uY2FsdmVzQGJvcmRlci1pbm5vdmF0aW9uLmNvbQ_YWlvcHNfc3RvcmFnZV9wcm9qZWN0aW9u__RMD546f44b20564d9b63_at_1732179600_6116/timeline",
        "summary": "/services/search/v2/jobs/scheduler_dGlhZ28uZ29uY2FsdmVzQGJvcmRlci1pbm5vdmF0aW9uLmNvbQ_YWlvcHNfc3RvcmFnZV9wcm9qZWN0aW9u__RMD546f44b20564d9b63_at_1732179600_6116/summary",
        "control": "/services/search/v2/jobs/scheduler_dGlhZ28uZ29uY2FsdmVzQGJvcmRlci1pbm5vdmF0aW9uLmNvbQ_YWlvcHNfc3RvcmFnZV9wcm9qZWN0aW9u__RMD546f44b20564d9b63_at_1732179600_6116/control"
      },
      "published": "2024-11-21T09:00:27Z",
      "author": "tiago.goncalves@border-innovation.com",
      "content": {
        "bundleVersion": "11289842698950824761",
        "canSummarize": false,
        "cursorTime": "1970-01-01T00:00:00Z",
        "defaultSaveTTL": "604800",
        "defaultTTL": "600",
        "delegate": "scheduler",
        "diskUsage": 593920,
        "dispatchState": "DONE",
        "doneProgress": 1,
        "dropCount": 0,
        "earliestTime": "2024-11-21T00:00:00Z",
        "eventAvailableCount": 0,
        "eventCount": 0,
        "eventFieldCount": 0,
        "eventIsStreaming": false,
        "eventIsTruncated": false,
        "eventSearch": "search (index=_internal ...",
        "eventSorting": "none",
        "isBatchModeSearch": true,
        "isDone": true,
        "isEventsPreviewEnabled": false,
        "isFailed": false,
        "isFinalized": false,
        "isPaused": false,
        "isPreviewEnabled": false,
        "isRealTimeSearch": false,
        "isRemoteTimeline": false,
        "isSaved": false,
        "isSavedSearch": true,
        "isTimeCursored": true,
        "isZombie": false,
        "is_prjob": true,
        "keywords": "app::aiops_storage_projection index::_internal result_count::0 \"savedsearch_name::edp aiops sp*\" search_type::scheduled source::*scheduler.log",
        "label": "EDP AIOPS - Falha no treino dos modelos de previsão",
        "latestTime": "2024-11-21T09:00:00Z",
        "normalizedSearch": "litsearch (index=_internal ...",
        "numPreviews": 0,
        "optimizedSearch": "| search (index=_internal app=...",
        "phase0": "litsearch (index=_internal ...",
        "phase1": "addinfo type=count label...",
        "pid": "3368900",
        "priority": 5,
        "provenance": "scheduler",
        "remoteSearch": "litsearch (index=_internal ...",
        "reportSearch": "table _time...",
        "resultCount": 0,
        "resultIsStreaming": false,
        "resultPreviewCount": 0,
        "runDuration": 3.304000000000000003,
        "sampleRatio": "1",
        "sampleSeed": "0",
        "savedSearchLabel": "{\"owner\":\"tiago.goncalves@border-innovation.com\",\"app\":\"aiops_storage_projection\",\"sharing\":\"app\"}",
        "scanCount": 10,
        "search": "search ```Verifique se ...",
        "searchCanBeEventType": false,
        "searchEarliestTime": 1732147200,
        "searchLatestTime": 1732179600,
        "searchTotalBucketsCount": 48,
        "searchTotalEliminatedBucketsCount": 14,
        "sid": "scheduler_dGlhZ28uZ29uY2FsdmVzQGJvcmRlci1pbm5vdmF0aW9uLmNvbQ_YWlvcHNfc3RvcmFnZV9wcm9qZWN0aW9u__RMD546f44b20564d9b63_at_1732179600_6116",
        "statusBuckets": 0,
        "ttl": 147349,
        ...
      }
    }
  ]
}
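For comparison, a minimal working sketch with curl: the jobs endpoint expects form-encoded parameters rather than a JSON body, and the search string must begin with the search command and be URL-encoded exactly once. Host and credentials below are placeholders.

curl -k -u 'user:password' https://edp.splunkcloud.com:8089/services/search/v2/jobs \
  -d output_mode=json \
  --data-urlencode 'search=search index="oper_event_dynatrace_perf" source="dynatrace_timeseries_metrics_v2://dynatrace_synthetic_browser_totalduration" earliest=-96h'

The response should then contain a plain sid (not a scheduler_... one), which can be polled at /services/search/v2/jobs/<sid> and whose results are fetched from /services/search/v2/jobs/<sid>/results.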
Hi everyone, hope you are all doing well. I am trying to deploy the CIM in a search head cluster environment, and I have some questions:

1. Under /default I found two files (inputs.conf and indexes.conf) that seem to me to be related to an indexer cluster rather than a search head cluster; am I right?

2. What does "the cim_modactions index definition is used with the common action model alerts and auditing" mean? I don't understand the actual meaning.

Splunk Common Information Model (CIM)
We are setting the colours of charts to our company standards, but this seems to have broken since Friday; we think it may be browser or HTML updates rather than Splunk. Example code we use:

/* CHART COLOURS FOR LEGEND */
.highcharts-legend .highcharts-series-0 .highcharts-point { fill: #28a197; }
.highcharts-legend .highcharts-series-1 .highcharts-point { fill: #f47738; }
.highcharts-legend .highcharts-series-2 .highcharts-point { fill: #6f72af; }

/* BAR CHART FILL AREA */
.highcharts-series-0 .highcharts-tracker-area { fill: #28a197; stroke: #28a197; }
.highcharts-series-1 .highcharts-tracker-area { fill: #f47738; stroke: #f47738; }
.highcharts-series-2 .highcharts-tracker-area { fill: #6f72af; stroke: #6f72af; }

/* PIE CHART COLOURS */
.highcharts-color-0 { fill: #28a197; }
.highcharts-color-1 { fill: #f47738; }
.highcharts-color-2 { fill: #6f72af; }

Bar charts broke first, and we found that replacing .highcharts-tracker-area with .highcharts-point fixed the bars, but it then left pie charts with only one colour.
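One possible direction, sketched under the assumption that the rendered SVG carries Highcharts' per-type series classes (highcharts-column-series, highcharts-pie-series): scope the per-series fills to column/bar series only, so pie slices keep their per-point highcharts-color-N fills.

/* Bars only: both classes sit on the same series group element. */
.highcharts-column-series.highcharts-series-0 .highcharts-point { fill: #28a197; stroke: #28a197; }
.highcharts-column-series.highcharts-series-1 .highcharts-point { fill: #f47738; stroke: #f47738; }
.highcharts-column-series.highcharts-series-2 .highcharts-point { fill: #6f72af; stroke: #6f72af; }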
I have a csv file that contains more than 100 numbers, like this:

11111111
22222222
33333333

I want to search for events that contain these numbers. I can use index=* "11111111" OR "22222222", but it takes way too long. Is there a faster way? These numbers don't live in a separate field, and I'm not searching within any particular field; I'm just searching for any event log that contains these numbers. Can anyone help? Thanks.
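A sketch of the usual lookup-driven pattern, assuming the CSV is uploaded as a lookup named numbers.csv whose header field is number. A subsearch field named query is treated as raw search terms, so this expands into the big OR without writing it by hand:

index=* [ | inputlookup numbers.csv | rename number AS query | fields query ]

Narrowing index=* to the specific indexes involved will usually help far more than the OR expansion itself.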
Hi team, we are planning to perform a silent installation of the Splunk Universal Forwarder on a Linux client machine. So far, we have created a splunk user on the client machine, downloaded the .tgz forwarder package, and extracted it to the /opt directory. The folder /opt/splunkforwarder now exists and its contents are accessible. I have navigated to the /opt/splunkforwarder/bin directory, and now I want to execute a single command to: agree to the license without prompts, and set the admin username and password. I found a reference for a similar approach on Windows, where the following command is used:

msiexec.exe /i splunkforwarder_x64.msi AGREETOLICENSE=yes SPLUNKUSERNAME=SplunkAdmin SPLUNKPASSWORD=Ch@ng3d! /quiet

However, I couldn't find an equivalent single command for Linux that accomplishes all these steps together. Could you please provide the exact command to achieve this on Linux?
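A sketch of the common two-step pattern for a .tgz install, using the documented user-seed.conf mechanism to seed the admin credentials before first start (the username and password below are placeholders):

# Seed the admin account; this file is consumed on first startup.
cat > /opt/splunkforwarder/etc/system/local/user-seed.conf <<'EOF'
[user_info]
USERNAME = SplunkAdmin
PASSWORD = Ch@ng3d!
EOF

# Accept the license and start without any prompts.
/opt/splunkforwarder/bin/splunk start --accept-license --answer-yes --no-prompt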
Hello, I want to see the latest data time for all indexes, i.e. when the most recent data arrived in each index.
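A minimal sketch with tstats, which reads index metadata and stays fast even across many indexes:

| tstats latest(_time) AS latest_time WHERE index=* BY index
| eval latest_time=strftime(latest_time, "%Y-%m-%d %H:%M:%S")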
Hello Splunkers! I am facing an issue with data ingested from the DB Connect add-on into Splunk. I have described the scenario below; I need your help fixing it. In DB Connect, the latest value I obtain has the STATUS value "FINISHED". However, when the events come into Splunk, I get values with the STATUS value "RELEASED" and without the latest timestamp (UPDATED). What I am doing so far: I am using the rising column method to get the data into Splunk, to avoid duplicates during ingestion.
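For context, a typical rising-column input query looks like the sketch below (table and column names are illustrative). DB Connect only fetches rows whose rising-column value exceeds the stored checkpoint, so a row whose STATUS later changes is re-ingested only if its UPDATED value also moves past the checkpoint:

SELECT ID, STATUS, UPDATED
FROM MY_JOBS_TABLE
WHERE UPDATED > ?        -- checkpoint placeholder maintained by DB Connect
ORDER BY UPDATED ASC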
Each of the two lookups has URL information, and I queried them like this:

1)
| set diff [| inputlookup test.csv] [| inputlookup test2.csv]

2)
| inputlookup test.csv
| join type=outer url
    [| inputlookup test2.csv
    | eval is_test2_log=1]
| where isnull(is_test2_log)

The two results are different, and the actual correct answer is number 2. In case 1 there are 200 results; in case 2 there are 300 results. I don't know why the two results are different. Or, even if they are different, shouldn't there be more results from number 1?
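For comparison, a sketch of a stats-based one-directional difference (urls in test.csv that never appear in test2.csv), which avoids join's subsearch limits. Note that set diff computes a symmetric difference over entire result rows, not just the url field, which is one reason the two counts can diverge:

| inputlookup test.csv
| eval src="a"
| append [| inputlookup test2.csv | eval src="b"]
| stats values(src) AS src BY url
| where src="a"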
When I search, I want to show the top results by a specific field, "field1", and also show "field2" and "field3". The problem is that some results don't have a "field2" but do contain the other fields, so I get different results depending on whether I include "field2":

| top field1 = all possible results
| top field1 field2 field3 = only results with all fields

What I want is just to show a blank where "field2" would be on matches that don't have a "field2". Basically, make "field2" optional.
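A minimal sketch of one way to make the field optional: fill the missing value before top, appended to the base search, so events lacking field2 are no longer dropped. If your version treats empty strings as missing, use a visible placeholder such as value="-" instead:

| fillnull value="" field2
| top field1 field2 field3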
Hi, did anyone come across adding "Oracle Autonomous DB" monitoring using a "Wallet" in AppDynamics? I need some help with the JDBC string when using a wallet file. Regards, Vinodh
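Not from the thread, but as a starting point: with recent Oracle JDBC drivers (18.3 and later), a wallet-based URL can reference the unzipped wallet directory via TNS_ADMIN, where the alias comes from the wallet's tnsnames.ora. The alias and path below are placeholders:

jdbc:oracle:thin:@myadb_high?TNS_ADMIN=/opt/oracle/wallet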
With respect to the "magic 8", should you always try to include them in the props for your various sourcetypes in a data set? I am slightly confused: if this is a best practice, why do most pre-configured TAs on Splunkbase include only the magic 3 or 4? What happened to the rest of them? Is it always a best practice to include all 8?
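For reference, a sketch of a sourcetype stanza carrying the eight settings usually meant by the "magic 8"; the values shown are illustrative for a single-line log with a leading ISO timestamp:

[my:sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z
MAX_TIMESTAMP_LOOKAHEAD = 30
TRUNCATE = 10000
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)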
Hello, I am currently building correlation searches in ES and I am running into a "searches delayed" issue. Some of my searches run every hour, most run every 2 hours, and some every 3 or 12 hours. My time range looks like: Earliest Time: -2h, Latest Time: now, cron schedule: 1 */2 * * *. For each new search I add +1 to the minute field of the cron schedule, up to 59, and then start over; so the next search's schedule would be 2 */2 * * *, and so on. Is there a more efficient way I should be scheduling searches? Thank you.
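A sketch of the scheduler-level alternative to hand-staggering cron minutes, assuming you can edit savedsearches.conf (or the equivalent UI fields); both settings are per-search, and the stanza name is a placeholder:

[My Correlation Search]
cron_schedule = 0 */2 * * *
# Let the scheduler start this search any time within a 15-minute window.
schedule_window = 15
# Alternatively, randomize the start within 10% of the schedule period.
allow_skew = 10%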
Hi team, two episodes are being generated for the same host with the same description but different severities: one high and one info. When I checked the correlation search, we have given only high, and I checked the NEAP (notable event aggregation policy) as well. Under the episode information, I found the severity given is the same as the first event. Can someone please guide me on how to avoid the info episode, and where to find the configuration driving the info severity? Regards, Nagalakshmi
I downloaded the tutorial data and want to upload it, but I keep getting an error message. Also, my system health indicator is showing red, and when I click it, it shows too many issues that have to be resolved. Where do I begin resolving my issue? Thanks