All Posts



@klowk when you added passAuth = <admin user>, did you not have to specify the password anywhere? How does it authenticate the user?
I did not. As I said above in my post, I'm very new to the subject, and I asked how to check whether the conf was taken into account. Thanks for telling me how; I did check, and Splunk does seem to take the default conf as written.
@jlstanley did you find a fix for this? I'm running into the same error as well.
How do I create a search job through a REST API?

The tool I have to use is Azure Data Factory, calling a REST API.

I am performing a POST Search with url="https://edp.splunkcloud.com:8089/services/search/v2/jobs?output_mode=json" and body={\n \"search\": \"search%20index%3D\"oper_event_dynatrace_perf\" source=\"dynatrace_timeseries_metrics_v2://dynatrace_synthetic_browser_totalduration\"%20earliest%3D-96h}"

In the response to the POST, the API returns a scheduler SID that references a search which is not the one I put in the POST's search. I checked Activity > Jobs in Splunk and no job was created for my search or for my user.

How can I build the POST search so that it creates a job for my search through the Splunk API?

Input:
{
  "method": "POST",
  "headers": { "Content-Type": "application/json; charset=UTF-8" },
  "url": "https://edp.splunkcloud.com:8089/services/search/v2/jobs?output_mode=json",
  "connectVia": { "referenceName": "integrationRuntime1", "type": "IntegrationRuntimeReference" },
  "body": "{\n \"search\": \"search%20index%3D\"oper_event_dynatrace_perf\" source=\"dynatrace_timeseries_metrics_v2://dynatrace_synthetic_browser_totalduration\"%20earliest%3D-96h}",
  "authentication": {
    "type": "Basic",
    "username": "saazrITAnalytD01",
    "password": { "type": "SecureString", "value": "***********" }
  }
}

Output:
{
  "links": {},
  "origin": "https://edp.splunkcloud.com:8089/services/search/v2/jobs",
  "updated": "2024-11-21T16:04:41Z",
  "generator": { "build": "be317eb3f944", "version": "9.2.2406.109" },
  "entry": [
    {
      "name": "search ```Verifique se algum dos modelos ...",
      "id": "https://edp.splunkcloud.com:8089/services/search/v2/jobs/scheduler_dGlhZ28uZ29uY2FsdmVzQGJvcmRlci1pbm5vdmF0aW9uLmNvbQ_YWlvcHNfc3RvcmFnZV9wcm9qZWN0aW9u__RMD546f44b20564d9b63_at_1732179600_6116",
      "updated": "2024-11-21T09:00:30.684Z",
      "links": {
        "alternate": "/services/search/v2/jobs/scheduler_dGlhZ28uZ29uY2FsdmVzQGJvcmRlci1pbm5vdmF0aW9uLmNvbQ_YWlvcHNfc3RvcmFnZV9wcm9qZWN0aW9u__RMD546f44b20564d9b63_at_1732179600_6116",
        "search_telemetry.json": "/services/search/v2/jobs/scheduler_dGlhZ28uZ29uY2FsdmVzQGJvcmRlci1pbm5vdmF0aW9uLmNvbQ_YWlvcHNfc3RvcmFnZV9wcm9qZWN0aW9u__RMD546f44b20564d9b63_at_1732179600_6116/search_telemetry.json",
        "search.log": "/services/search/v2/jobs/scheduler_dGlhZ28uZ29uY2FsdmVzQGJvcmRlci1pbm5vdmF0aW9uLmNvbQ_YWlvcHNfc3RvcmFnZV9wcm9qZWN0aW9u__RMD546f44b20564d9b63_at_1732179600_6116/search.log",
        "events": "/services/search/v2/jobs/scheduler_dGlhZ28uZ29uY2FsdmVzQGJvcmRlci1pbm5vdmF0aW9uLmNvbQ_YWlvcHNfc3RvcmFnZV9wcm9qZWN0aW9u__RMD546f44b20564d9b63_at_1732179600_6116/events",
        "results": "/services/search/v2/jobs/scheduler_dGlhZ28uZ29uY2FsdmVzQGJvcmRlci1pbm5vdmF0aW9uLmNvbQ_YWlvcHNfc3RvcmFnZV9wcm9qZWN0aW9u__RMD546f44b20564d9b63_at_1732179600_6116/results",
        "results_preview": "/services/search/v2/jobs/scheduler_dGlhZ28uZ29uY2FsdmVzQGJvcmRlci1pbm5vdmF0aW9uLmNvbQ_YWlvcHNfc3RvcmFnZV9wcm9qZWN0aW9u__RMD546f44b20564d9b63_at_1732179600_6116/results_preview",
        "timeline": "/services/search/v2/jobs/scheduler_dGlhZ28uZ29uY2FsdmVzQGJvcmRlci1pbm5vdmF0aW9uLmNvbQ_YWlvcHNfc3RvcmFnZV9wcm9qZWN0aW9u__RMD546f44b20564d9b63_at_1732179600_6116/timeline",
        "summary": "/services/search/v2/jobs/scheduler_dGlhZ28uZ29uY2FsdmVzQGJvcmRlci1pbm5vdmF0aW9uLmNvbQ_YWlvcHNfc3RvcmFnZV9wcm9qZWN0aW9u__RMD546f44b20564d9b63_at_1732179600_6116/summary",
        "control": "/services/search/v2/jobs/scheduler_dGlhZ28uZ29uY2FsdmVzQGJvcmRlci1pbm5vdmF0aW9uLmNvbQ_YWlvcHNfc3RvcmFnZV9wcm9qZWN0aW9u__RMD546f44b20564d9b63_at_1732179600_6116/control"
      },
      "published": "2024-11-21T09:00:27Z",
      "author": "tiago.goncalves@border-innovation.com",
      "content": {
        "bundleVersion": "11289842698950824761",
        "canSummarize": false,
        "cursorTime": "1970-01-01T00:00:00Z",
        "defaultSaveTTL": "604800",
        "defaultTTL": "600",
        "delegate": "scheduler",
        "diskUsage": 593920,
        "dispatchState": "DONE",
        "doneProgress": 1,
        "dropCount": 0,
        "earliestTime": "2024-11-21T00:00:00Z",
        "eventAvailableCount": 0,
        "eventCount": 0,
        "eventFieldCount": 0,
        "eventIsStreaming": false,
        "eventIsTruncated": false,
        "eventSearch": "search (index=_internal ...",
        "eventSorting": "none",
        "isBatchModeSearch": true,
        "isDone": true,
        "isEventsPreviewEnabled": false,
        "isFailed": false,
        "isFinalized": false,
        "isPaused": false,
        "isPreviewEnabled": false,
        "isRealTimeSearch": false,
        "isRemoteTimeline": false,
        "isSaved": false,
        "isSavedSearch": true,
        "isTimeCursored": true,
        "isZombie": false,
        "is_prjob": true,
        "keywords": "app::aiops_storage_projection index::_internal result_count::0 \"savedsearch_name::edp aiops sp*\" search_type::scheduled source::*scheduler.log",
        "label": "EDP AIOPS - Falha no treino dos modelos de previsão",
        "latestTime": "2024-11-21T09:00:00Z",
        "normalizedSearch": "litsearch (index=_internal ...",
        "numPreviews": 0,
        "optimizedSearch": "| search (index=_internal app=...",
        "phase0": "litsearch (index=_internal ...",
        "phase1": "addinfo type=count label...",
        "pid": "3368900",
        "priority": 5,
        "provenance": "scheduler",
        "remoteSearch": "litsearch (index=_internal ...",
        "reportSearch": "table _time...",
        "resultCount": 0,
        "resultIsStreaming": false,
        "resultPreviewCount": 0,
        "runDuration": 3.304000000000000003,
        "sampleRatio": "1",
        "sampleSeed": "0",
        "savedSearchLabel": "{\"owner\":\"tiago.goncalves@border-innovation.com\",\"app\":\"aiops_storage_projection\",\"sharing\":\"app\"}",
        "scanCount": 10,
        "search": "search ```Verifique se ...",
        "searchCanBeEventType": false,
        "searchEarliestTime": 1732147200,
        "searchLatestTime": 1732179600,
        "searchTotalBucketsCount": 48,
        "searchTotalEliminatedBucketsCount": 14,
        "sid": "scheduler_dGlhZ28uZ29uY2FsdmVzQGJvcmRlci1pbm5vdmF0aW9uLmNvbQ_YWlvcHNfc3RvcmFnZV9wcm9qZWN0aW9u__RMD546f44b20564d9b63_at_1732179600_6116",
        "statusBuckets": 0,
        "ttl": 147349,
        ...
      }
    }
  ]
}
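For reference, the search jobs endpoint generally expects an application/x-www-form-urlencoded body whose search parameter is a literal SPL string beginning with the search command, not a JSON body with pre-escaped characters. A minimal Python sketch of building that body (the helper name is mine; the host, index, and source are taken from the post above):

```python
from urllib.parse import urlencode

def build_search_payload(spl: str) -> str:
    """Build the form-encoded body for POST /services/search/v2/jobs.

    The endpoint expects application/x-www-form-urlencoded, with the full
    SPL query (starting with a command such as `search`) as the value of
    the `search` parameter; urlencode handles the percent-escaping, so the
    caller should pass plain SPL, not a pre-escaped string.
    """
    if not spl.lstrip().startswith(("search", "|")):
        spl = "search " + spl  # ad-hoc searches must begin with a command
    return urlencode({"search": spl, "earliest_time": "-96h"})

body = build_search_payload(
    'index="oper_event_dynatrace_perf" '
    'source="dynatrace_timeseries_metrics_v2://dynatrace_synthetic_browser_totalduration"'
)
# The request itself would then be (host/credentials as in the post):
#   POST https://edp.splunkcloud.com:8089/services/search/v2/jobs?output_mode=json
#   Content-Type: application/x-www-form-urlencoded
#   <body>
```

In Azure Data Factory that would mean setting the Content-Type header to application/x-www-form-urlencoded and putting the encoded string in the body, rather than sending JSON.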
Fixed with:

.highcharts-series.highcharts-series-0.highcharts-column-series.highcharts-tracker rect { fill: #28a197; stroke: #28a197; }
.highcharts-series.highcharts-series-1.highcharts-column-series.highcharts-tracker rect { fill: #f47738; stroke: #f47738; }
.highcharts-series.highcharts-series-2.highcharts-column-series.highcharts-tracker rect { fill: #6f72af; stroke: #6f72af; }
Hi everyone, hope you are all doing well.

I am trying to deploy the CIM in a Search Head Cluster environment, and I have some questions:
1. Under /default I found two files (inputs.conf & indexes.conf) that seem to me to be related to an indexer cluster, not a search head cluster. Am I right?
2. What does "the cim_modactions index definition is used with the common action model alerts and auditing" mean? I don't understand the actual meaning.
Splunk Common Information Model (CIM)
I do feel a bit stupid now... My cron was wrong. The method was perfectly sane. I did struggle to find any actual documentation saying that this was a valid way of doing it, so I hope this question will help future searchers determine that. Thanks for helping my grey matter along.
Hi @Crotyo , could you share your search? Ciao. Giuseppe
OK. Did you verify what Splunk actually sees?

| rest /data/indexes/myindex

Some of this info you can also see in Settings -> Indexes.
I am getting ready to attempt the Rapid7 Nexpose add-on. Did it end up working for you? I am wondering if there is a better approach, since the app only has two stars on Splunkbase and is not a Splunk-supported app.
I tried that and the search returned empty. I tried checking the inputlookup command and it did list all the numbers.
I did try that, and the search result returned empty.
And you checked your effective settings with btool?
Thank you so much for your response. However, I did it this way because I wanted to bypass ingesting the logs into a Splunk index and just collect them as a lookup that anyone can use later on. Also, it was working previously, until a Splunk upgrade forced me to upgrade the add-on. So I do not understand why it was working before and then stopped working.
Thanks for your input! Your explanations were clear, but they do not explain how/why my index did not roll the buckets after reaching the maxTotalDataSizeMB of 5GB, and instead grew to 35GB.
OK, but the indexes are all set with a maxTotalDataSizeMB of 5GB (a default written in my indexes.conf), which, from what I understood, should have stopped each index, individually, from exceeding this size and forced the older warm buckets to cold to avoid saturation.

The doc: https://docs.splunk.com/Documentation/Splunk/9.3.2/Admin/Indexesconf

maxTotalDataSizeMB = <nonnegative integer>
* The maximum size of an index, in megabytes.
* If an index grows larger than the maximum size, splunkd freezes the oldest data in the index.
* This setting applies only to hot, warm, and cold buckets. It does not apply to thawed buckets.
...

However, the saturation did happen with one of them; that is the issue I don't understand. My disk is 40GB, and this specific index grew to 35GB, hitting the minimum free disk space and taking down my indexer. The rolling criterion was met, so why didn't it roll the buckets?
OK. See my response there - https://community.splunk.com/t5/Deployment-Architecture/How-do-I-enforce-disk-usage-on-volumes-by-index/m-p/703959/highlight/true#M28814

Additionally, because I'm not sure if this has been said here or not: just because you define something as a volume doesn't mean that everything "physically located" in that directory is treated by Splunk as part of that volume. So if you define a volume like in your case:

[volume:MyVolume]
path = $SPLUNK_DB

you must explicitly reference that volume when defining index parameters; otherwise the index will not be considered part of it. In other words, if your index has

coldPath = volume:MyVolume/myindexsaturated/colddb

this directory will be managed by the normal per-index constraints as well as the global volume-based constraints. But if you define it as

coldPath = $SPLUNK_DB/myindexsaturated/colddb

even though it is in exactly the same place on disk, it is not considered part of that volume.
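Putting those two pieces together, a sketch of a volume-aware indexes.conf might look like this (the index name is taken from the thread; the sizes and the maxVolumeDataSizeMB value are illustrative, not a recommendation):

```ini
# Define the volume with an explicit size cap so Splunk can enforce it.
[volume:MyVolume]
path = $SPLUNK_DB
maxVolumeDataSizeMB = 30000

# Reference the volume with the volume: prefix so the index is actually
# governed by the volume's constraints.
[myindexsaturated]
homePath = volume:MyVolume/myindexsaturated/db
coldPath = volume:MyVolume/myindexsaturated/colddb
thawedPath = $SPLUNK_DB/myindexsaturated/thaweddb  # thawedPath cannot use volume: syntax
maxTotalDataSizeMB = 5000
```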
There exist some limits for the transaction command; you can find them under "Memory control options" in transaction - Splunk Documentation. More details on these limits can be found in the transactions stanza in limits.conf - Splunk Documentation.
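As a sketch, the relevant limits.conf stanza looks roughly like this (the values shown are, to my recollection, the documented defaults; check the spec file for your version before changing them):

```ini
[transactions]
maxopentxn = 5000       # max open transactions kept in the pool before LRU eviction
maxopenevents = 100000  # max events stored across open transactions before eviction
```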
There is nothing technically wrong with the current settings. Warm buckets did not roll to cold because none of the criteria for rolling buckets were met. Reaching the minimum disk space is not a criterion. Buckets roll either because the index is too full, the bucket(s) are too old, or the maximum number of warm buckets has been reached.
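For orientation, the settings behind those criteria live in indexes.conf; a sketch with what I believe are the stock defaults (illustrative, not a recommendation):

```ini
[myindex]
# Hot -> warm: bucket size or time span
maxDataSize = auto                  # roll a hot bucket when it reaches its size cap
maxHotSpanSecs = 7776000            # ...or when its time span gets too long
# Warm -> cold: warm bucket count
maxWarmDBCount = 300                # roll the oldest warm bucket past this count
# Cold -> frozen: total index size or data age
maxTotalDataSizeMB = 5000           # freeze the oldest data when the index exceeds this
frozenTimePeriodInSecs = 188697600  # ...or when events outlive this age
```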
We are using Splunk Enterprise version 9.3.1, and we need this for a Classic Dashboard.

What I managed to put together is this:

<html>
  <style type="text/css">
    table tr:nth-child(odd) td { color: red; }
    table tr:nth-child(even) td { color: green; }
  </style>
</html>

It looks like this: [screenshot of the resulting table]

What I actually need is to select rows containing INFO / ERROR / WARNING and color them RED / BLUE / YELLOW.
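If the INFO/ERROR/WARNING text lives in a single column, one common Classic-dashboard approach colors that cell with an expression palette instead of CSS. A sketch (the field name log_level is my assumption, and the INFO=red / ERROR=blue / WARNING=yellow mapping follows the order in the question):

```xml
<table>
  <search>
    <query>index=myindex | table _time log_level message</query>
  </search>
  <format type="color" field="log_level">
    <colorPalette type="expression">case(match(value, "INFO"), "#FF0000",
      match(value, "ERROR"), "#0000FF",
      match(value, "WARNING"), "#FFFF00")</colorPalette>
  </format>
</table>
```

Note this colors only the matching cell; coloring the entire row in a Classic dashboard generally requires a small custom JavaScript table cell renderer rather than pure CSS.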