All Topics
I have events that contain a specific field whose value is sometimes so long that the rest of the event gets truncated. I want to remove this field or replace its value with "long field detected". The problematic field is called "file" and I need to catch its last appearance; I also want to keep the data after it, so the removal should stop at the first "," (comma). The event also contains nested fields. I've tried props.conf + transforms.conf, but it doesn't work. Here is an example of one event: deleted due to security reasons
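A minimal sketch of a SEDCMD approach, assuming the field appears in _raw as file=<value> and the value runs up to the next comma (the sourcetype name is a placeholder; the greedy leading group makes the pattern bind to the last occurrence of file=):

props.conf
[your:sourcetype]
SEDCMD-trim_file = s/(.*)file=[^,]*/\1file=long field detected/

Since SEDCMD rewrites _raw at index time, it only affects newly ingested events; if the value itself can contain commas, or the field is quoted JSON, the character class needs adjusting to your actual format.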
Hi, I wonder what the easiest way is to monitor the deletion of files/folders on a CIFS NetApp using Splunk. I saw an add-on available; could someone share any experience with this use case? I have SC4S in place, so I thought I'd configure syslog on the NetApp to be sent to SC4S and start digging into the logs. Is there any app I could leverage to ease the pain? Many thanks.
Hello everyone, I'm facing a persistent issue with executing a script via a playbook in Splunk SOAR that uses WinRM. Here's the context:

I've created a playbook that is supposed to isolate a host via WinRM. The script works perfectly when I run it manually using the "Run Script" action from Splunk SOAR: the host gets isolated. However, when the same script is executed by the playbook, the execution is marked as "successful," but none of the expected outcomes occur: the host is not isolated.

To be more precise:
- I added an elevation check in the script, which relaunches in administrator mode with -Verb RunAs if necessary. This works perfectly for the manual action.
- The script writes to a log file located in C:\Users\Public\Documents to avoid permission issues, but the log file is not created when executed by the playbook.
- I've tried other directories and even simplified the logic to just disable a network adapter with Disable-NetAdapter, but nothing seems to work.

In summary, everything works fine when done manually, but not when automated via the playbook. I have the impression that there's a difference in context between manual execution and playbook execution that's causing the issue, perhaps related to permissions or WinRM session restrictions. Does anyone have any idea what might be preventing the playbook from executing this script correctly, or any suggestions for workarounds? I'm really running out of ideas and any help would be greatly appreciated. Thanks in advance!
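One context difference worth checking, offered as a guess rather than a confirmed diagnosis: Start-Process -Verb RunAs triggers an interactive UAC prompt, and a WinRM session driven by a playbook has no interactive desktop to show it on, so the elevated relaunch can fail silently while the parent script still exits cleanly (which SOAR then reports as "successful"). A small PowerShell diagnostic sketch to drop at the top of the script, recording who the playbook actually runs as and whether the token is already elevated (the log path is just an example):

# record the execution context the playbook actually gets
Start-Transcript -Path "C:\Users\Public\Documents\soar_diag.log" -Append
whoami
# True if the current token is elevated; if False here, the RunAs relaunch is the likely failure point
[Security.Principal.WindowsPrincipal]::new([Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole]::Administrator)
Stop-Transcript

If the transcript never appears at all, the playbook may be running against a different asset or credential than the manual action.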
I am trying to configure Splunk to ingest only Application, System, and Security logs from my local machine, but I can't find "Local event log collection" in my Splunk Enterprise instance on my MacBook. On my former laptop, which ran Windows, I could find the "Local event log collection" option in the data inputs section. How can I go about this?
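For context, "Local event log collection" only appears when Splunk itself runs on Windows, since it reads the Windows Event Log API locally; that input type doesn't exist on macOS. One common pattern is a universal forwarder on a Windows box sending to the Mac-hosted instance. A minimal sketch, with the receiving host and port as placeholders:

inputs.conf (on the Windows universal forwarder)
[WinEventLog://Application]
disabled = 0
[WinEventLog://System]
disabled = 0
[WinEventLog://Security]
disabled = 0

outputs.conf (on the same forwarder)
[tcpout]
defaultGroup = to_mac
[tcpout:to_mac]
server = your-macbook.local:9997

The Mac instance would also need receiving enabled on port 9997 (Settings > Forwarding and receiving).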
Hi Community,

I'm trying to build a regex that can help me reduce the size of events for an EventCode, in my case 4627. The idea is to use props and transforms:

props.conf
[XmlWinEventLog:Security]
TRANSFORMS-reduce_raw = reduce_event_raw

transforms.conf
[reduce_event_raw]
REGEX = <Event[^>]*>.*?<System>.*?<Provider\s+Name='(?<ProviderName>[^']*)'\s+Guid='(?<ProviderGuid>[^']*)'.*?<EventID>(?<EventID>\d+)</EventID>.*?<Version>(?<Version>\d+)</Version>.*?<Level>(?<Level>\d+)</Level>.*?<Task>(?<Task>\d+)</Task>.*?<Opcode>(?<Opcode>\d+)</Opcode>.*?<Keywords>(?<Keywords>[^<]*)</Keywords>.*?<TimeCreated\s+SystemTime='(?<SystemTime>[^']*)'.*?<EventRecordID>(?<EventRecordID>\d+)</EventRecordID>.*?<Correlation\s+ActivityID='(?<ActivityID>[^']*)'.*?<Execution\s+ProcessID='(?<ProcessID>\d+)'\s+ThreadID='(?<ThreadID>\d+)'.*?<Channel>(?<Channel>[^<]*)</Channel>.*?<Computer>(?<Computer>[^<]*)</Computer>.*?<EventData>.*?<Data\s+Name='SubjectUserSid'>(?<SubjectUserSid>[^<]*)</Data>.*?<Data\s+Name='SubjectUserName'>(?<SubjectUserName>[^<]*)</Data>.*?<Data\s+Name='SubjectDomainName'>(?<SubjectDomainName>[^<]*)</Data>.*?<Data\s+Name='SubjectLogonId'>(?<SubjectLogonId>[^<]*)</Data>.*?<Data\s+Name='TargetUserSid'>(?<TargetUserSid>[^<]*)</Data>.*?<Data\s+Name='TargetUserName'>(?<TargetUserName>[^<]*)</Data>.*?<Data\s+Name='TargetDomainName'>(?<TargetDomainName>[^<]*)</Data>.*?<Data\s+Name='TargetLogonId'>(?<TargetLogonId>[^<]*)</Data>.*?<Data\s+Name='LogonType'>(?<LogonType>[^<]*)</Data>.*?<Data\s+Name='EventIdx'>(?<EventIdx>[^<]*)</Data>.*?<Data\s+Name='EventCountTotal'>(?<EventCountTotal>[^<]*)</Data>.*?<Data\s+Name='GroupMembership'>(?<GroupMembership>.*?)</Data>.*?</EventData>.*?</Event>
FORMAT = ProviderName::$1 ProviderGuid::$2 EventID::$3 Version::$4 Level::$5 Task::$6 Opcode::$7 Keywords::$8 SystemTime::$9 EventRecordID::$10 ActivityID::$11 ProcessID::$12 ThreadID::$13 Channel::$14 Computer::$15 SubjectUserSid::$16 SubjectUserName::$17 SubjectDomainName::$18 SubjectLogonId::$19 TargetUserSid::$20 TargetUserName::$21 TargetDomainName::$22 TargetLogonId::$23 LogonType::$24 EventIdx::$25 EventCountTotal::$26 GroupMembership::$27
DEST_KEY = _raw

Then I will be able to pick which bits from the raw data get indexed. It looks like the regex does not pick up the fields correctly.

Here is the raw event:
<Event xmlns='http://schemas.microsoft.com/win/2004/08/events/event'><System><Provider Name='Microsoft-Windows-Security-Auditing' Guid='{54849625-5478-4994-a5ba-3e3bxxxxxx}'/><EventID>4627</EventID><Version>0</Version><Level>0</Level><Task>12554</Task><Opcode>0</Opcode><Keywords>0x8020000000000000</Keywords><TimeCreated SystemTime='2024-11-27T11:27:45.6695363Z'/><EventRecordID>2177113</EventRecordID><Correlation ActivityID='{01491b93-40a4-0002-6926-4901a440db01}'/><Execution ProcessID='1196' ThreadID='1312'/><Channel>Security</Channel><Computer>Computer1</Computer><Security/></System><EventData><Data Name='SubjectUserSid'>S-1-5-18</Data><Data Name='SubjectUserName'>CXXXXXX</Data><Data Name='SubjectDomainName'>CXXXXXXXX</Data><Data Name='SubjectLogonId'>0x3e7</Data><Data Name='TargetUserSid'>S-1-5-18</Data><Data Name='TargetUserName'>SYSTEM</Data><Data Name='TargetDomainName'>NT AUTHORITY</Data><Data Name='TargetLogonId'>0x3e7</Data><Data Name='LogonType'>5</Data><Data Name='EventIdx'>1</Data><Data Name='EventCountTotal'>1</Data><Data Name='GroupMembership'> %{S-1-5-32-544} %{S-1-1-0} %{S-1-5-11} %{S-1-16-16384}</Data></EventData></Event>

Any help troubleshooting the problem will be highly valued. Thank you in advance!
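Two things worth checking here, offered as hypotheses rather than a confirmed diagnosis:

1. With DEST_KEY = _raw, FORMAT is not a field-extraction list: it is the literal replacement text for the event. The name::$n syntax creates indexed fields only in a transform that writes metadata (no DEST_KEY, with WRITE_META = true); written into _raw it simply becomes the new event text.
2. Index-time REGEX only scans the first 4096 characters of an event by default (LOOKAHEAD), and a full 4627 XML event can easily exceed that, so later capture groups never match.

If the goal is simply a smaller _raw, a sketch that keeps a few captures and rewrites the event (the kept fields are illustrative choices):

[reduce_event_raw]
REGEX = <EventID>(\d+)</EventID>.*?TimeCreated SystemTime='([^']*)'.*?<Computer>([^<]*)</Computer>.*?<Data Name='TargetUserName'>([^<]*)</Data>.*?<Data Name='LogonType'>([^<]*)</Data>
DEST_KEY = _raw
FORMAT = EventID=$1 SystemTime=$2 Computer=$3 TargetUserName=$4 LogonType=$5
LOOKAHEAD = 32768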
Dear All, I am facing difficulty loading all the .evtx files in a folder into Splunk. I am using the free Splunk version for learning. One folder has 306 files, but Splunk loaded only 212 of them. In another case the folder has 47 files, but Splunk loaded only 3. The issue persists even after multiple attempts, and the count of successfully loaded files keeps changing each time. Kindly help me with the possible reasons why this is happening. MMM
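One possibility to rule out, assuming the folder is ingested with a file monitor on a Windows instance: Splunk decides whether a file was "already seen" by a CRC of its first bytes, so files with identical headers can be silently skipped as duplicates. A sketch of a monitor stanza that salts the CRC with the file path (the path and index are placeholders):

inputs.conf
[monitor://C:\evtx_archive\*.evtx]
index = main
crcSalt = <SOURCE>

The literal string <SOURCE> is the documented value, not a placeholder to substitute. Comparing the skipped file names against index=_internal source=*splunkd.log* would help confirm whether CRC deduplication is actually the cause.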
Hi Splunkers, I have an HWF that collects the firewall logs. For cost-saving reasons, some events are filtered out and not ingested into the indexer. For example, I have:

props.conf
[my_sourcetype]
TRANSFORMS-set = dns, external

and transforms.conf
[dns]
REGEX = dstport=53
DEST_KEY = queue
FORMAT = nullQueue

[external]
REGEX = <to specific external IP range>
DEST_KEY = queue
FORMAT = nullQueue

So my HWF drops those events and the "rest" is ingested to the indexer (on-prem) - so far so good... One of our operational teams has requested that I ingest "their" logs into their Splunk Cloud instance. How can I technically do this?

1. I want to keep all the logs on the on-prem indexer with the filtering
2. I want to ingest events from a specific IP range to Splunk Cloud without filtering

BR, Norbert
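A sketch of one approach using _TCP_ROUTING, with the caveat that nullQueue drops an event for every destination, so the dns/external filters must not match the events you want to reach Splunk Cloud (the group names, IP pattern, and servers are placeholders; the Cloud forwarder credentials app from your stack normally supplies the real tcpout group):

outputs.conf
[tcpout]
defaultGroup = onprem
[tcpout:onprem]
server = idx1.example.local:9997
[tcpout:splunkcloud]
server = inputs.yourstack.splunkcloud.com:9997

transforms.conf
[route_team_to_cloud]
SOURCE_KEY = _raw
REGEX = src=10\.20\.30\.
DEST_KEY = _TCP_ROUTING
FORMAT = splunkcloud,onprem

props.conf
[my_sourcetype]
TRANSFORMS-set = route_team_to_cloud, dns, external

With this shape, the team's IP range is cloned to both output groups while everything else follows the default group. To ship that range truly unfiltered, the dns/external REGEX would additionally need a negative condition for it, or the filtering would have to move to the indexer tier.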
Hello, my apologies, I hope this makes sense; I'm still learning. I have events coming in that look like this: I need to create an alert for when state = 1 for name = VZEROP002, but I can't figure out how to write the query to only look at the state for VZEROP002. The query I'm running is:

index=zn | spath "items{1}.state" | search "items{1}.state"=1

But the search results still return events where VZEROP002 has a state of 2 and VZEROP001 has a state of 1. I hope that makes sense, and thanks in advance for any help with this. Thanks, Tom
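A sketch of one way to keep each name paired with its own state, assuming the events carry a JSON array called items whose elements each contain name and state (the field paths are guesses from the snippet; the hard-coded position in items{1} is what lets values from different names cross over):

index=zn
| spath path=items{} output=item
| mvexpand item
| spath input=item
| search name="VZEROP002" state=1

spath path=items{} pulls every array element into a multivalue field, mvexpand gives each element its own result row, and the second spath re-parses that single element so name and state always belong together.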
I usually have to produce documentation for Splunk dashboards, and it's really time-consuming, so I was thinking maybe I can automate it so that it can generate a simple document for any dashboard. Is it possible?
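It should be feasible to at least generate the skeleton from Splunk itself, since every dashboard's definition is exposed over REST. A sketch that lists each dashboard with its app, description, and source definition (XML for classic dashboards, JSON for Dashboard Studio), which could be exported and fed into a document template:

| rest /servicesNS/-/-/data/ui/views splunk_server=local
| search isDashboard=1
| table title eai:acl.app description eai:data

Parsing eai:data further (panel titles, the searches behind each panel) would need a small external script against the same endpoint, but the names and layout are all in there.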
Hi All, is there any way to freeze a tile in a dashboard so that it stays in place when we scroll down in the dashboard?
Hi, any help or use case for the question below? How do I share a dashboard with the internal team as a URL link, so that it won't ask them to enter a username and password and logs them directly into the dashboard as read-only (Dashboard Studio)?
In the process of upgrading our Splunk Enterprise. We're currently on version 7.1.3 (I know, super old, bear with me). I installed the Splunk Platform Readiness App v2.2.1 and set the permissions to write as the documentation states. When I go to launch the app, I get this error:

Error reading progress for user: <me> on host <hostname>

Digging a bit more into it, I realized that the Splunk Platform Readiness App uses the KV store, and I ran into these errors:

KV Store process terminated abnormally (exit code 14, status exited with code 14) See mongod.log and splunkd.log for details
KV Store changed status to failed.
KV Store process terminated.
Failed to start KV Store process. See mongod.log and splunkd.log for details.

*******Splunk is running on Windows Server*******

I tried renaming the server.pem file in Splunk/etc/auth and restarting - it made a new server.pem file, but the same issues persist. I attempted to look into mongod.log and splunkd.log, but I'm not sure what I should be looking for. I hadn't yet tried renaming the mongo folder in /var/lib/splunk/kvstore to mongo(old), as I saw that it worked for some other people with the same issue.

Did some more troubleshooting: I renamed the mongo folder to mongo(old) and Splunk recreated a new one. Same issues as before. I looked in the mongod.log file and found this:

Detected unclean shutdown - C:\Program Files\Splunk\var\lib\kvstore\mongo\mongod.lock is not empty.
InFile::open(), CreateFileW for C:\Program Files\Splunk\var\lib\splunk\kvstore\mongo\journal\lsn failed with Access is denied.
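That last "Access is denied" line suggests the account running splunkd lacks NTFS rights on the recreated kvstore files. A sketch of a permissions fix, assuming Splunk runs under a dedicated service account (the account name is a placeholder; run from an elevated prompt):

icacls "C:\Program Files\Splunk\var\lib\splunk\kvstore" /grant "DOMAIN\splunk-svc:(OI)(CI)F" /T

(OI)(CI)F grants Full Control that inherits to new subfolders and files, and /T applies it to everything already there. Checking which account the splunkd service actually runs under (services.msc) before granting anything is worth the extra minute.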
Hey there, let me start off by saying I can delete labels if there are no assets using them. The issue arises when an asset is "using" a label but I cannot tell how. For some reason we have both "event" and "events", and I would like to delete the unused "event" label, but there's an asset using it. Looking under all configured assets, I cannot find where the label "event" is used. How can I accomplish my goal of finding the asset, when all I'm shown is a simple description: 1 Asset (asset name)? When looking at all my assets, only one matches, but inside this asset for the REST API app I can't find any mention or designation of labels whatsoever.
The goal is to get Entra logs into Splunk Cloud and alert on non-domain-affiliated logins. I can't seem to find any documentation on this.
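Assuming the sign-in logs arrive via an Event Hub and a Microsoft cloud add-on (the index, sourcetype, and field names vary by add-on, so treat all of these as placeholders to verify against your data), an alert search could start from something like:

index=azure sourcetype=azure:monitor:aad category=SignInLogs
| where NOT like('properties.userPrincipalName', "%@yourdomain.com")
| table _time properties.userPrincipalName properties.ipAddress properties.appDisplayName

Saved as a scheduled alert, this flags any sign-in whose UPN falls outside your own domain.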
Hello, I have the inputs stanza below to monitor the syslog feed coming into index=base. Now we need to filter out events for specific host names and re-route them to a new index.

[monitor:///usr/local/apps/logs/*/base_log/*/*/*/*.log]
disabled = 0
sourcetype = base:syslog
index = base
host_segment = 9

For example, I have hosts (serverxyz.myserver.com, myhostabc.myserver.com, myhostuvw.myserver.com); now I want to match *xyz* and *abc* and re-route those to the new index. Since the old config has /*/, which feeds everything to the old index, I wanted to add a blacklist to the old stanza to avoid ingesting into both indexes.

OLD stanza:
[monitor:///usr/local/apps/logs/*/base_log/*/*/*/*.log]
disabled = 0
sourcetype = base:syslog
index = base
host_segment = 9
blacklist = (xyz|abc)

NEW stanza:
[monitor:///usr/local/apps/logs/*/base_log/*/*/*xyz*/*.log]
disabled = 0
sourcetype = base:syslog
index = mynewindex
host_segment = 9

[monitor:///usr/local/apps/logs/*/base_log/*/*/*abc*/*.log]
disabled = 0
sourcetype = base:syslog
index = mynewindex
host_segment = 9

(Note: blacklist takes a regular expression matched against the full path, so shell-style wildcards like *xyz* are not valid there; (xyz|abc) matches any path containing either string.)
Celebrate the season with our December lineup of Community Office Hours, Tech Talks, and Webinars! Whether you're decking the halls or cozying up by the fire, we've got the perfect way to learn and connect this month. Check out the details below!

What are Community Office Hours?
Community Office Hours is an interactive 60-minute Zoom series where participants can ask questions and engage with technical Splunk experts on various topics. Whether you're just starting your journey with Splunk or looking for best practices to take your deployment to the next level, Community Office Hours provides a safe and open environment for you to get help. If you have an issue you can't seem to resolve, have a question you're eager to get answered by Splunk experts, are exploring new use cases, or just want to sit and listen in, Community Office Hours is for you!

What are Tech Talks?
Tech Talks are designed to accelerate adoption and ensure your success. In these engaging 60-minute sessions, we dive deep into best practices, share valuable insights, and explore additional use cases to expand your knowledge and proficiency with our products. Whether you're looking to optimize your workflows, discover new functionalities, or troubleshoot challenges, Tech Talks is your go-to resource.

SECURITY
Office Hours | Splunk Threat Research Team: Security Content
December 11, 2024 at 1pm PT
This is an opportunity to directly ask members of the Splunk Threat Research Team your questions, such as...
- What are the latest security content updates from the Splunk Threat Research Team?
- What are the best practices for accessing, implementing, and using the team's security content?
- What tips and tricks can help leverage Splunk Attack Range, Contentctl, and other resources developed by the Splunk Threat Research Team?
- Any other questions about the team's content and resources!

OBSERVABILITY
Office Hours | Kubernetes Observability
December 10, 2024 at 1pm PT
What can you ask in this AMA?
- How do I use and customize Kubernetes navigators?
- What are best practices for optimizing Kubernetes alerts and troubleshooting workflows?
- Is there a way to view Kubernetes logs correlated with metrics?
- How do I review Pod status?
- How do I monitor Kubernetes resource limits?
- Anything else you'd like to learn!

PLATFORM
Office Hours | Awesome Admins: Running a Healthy Splunk Platform Environment
December 12, 2024 at 1pm PT
What can you ask in this AMA?
- What should I be looking at as a Splunk Cloud or Splunk Enterprise Admin, and why?
- What are some best practices for using workload management?
- How can I set up a scalable architecture?
- What are some best practices for monitoring system health with the Cloud Monitoring Console?
- What are some tips for managing and balancing disaster recovery?
- Any best practices for managing large numbers of users?
- Which admin tasks should I be streamlining with ACS?
- Anything else you'd like to learn!
Hello, we have a query for an alert that was working before but is no longer returning the correct results. We haven't changed anything on our instance, so I'm not sure what the cause would be. The query is below (I blanked out the index names, etc., of course). I tested with a different query, which is returning the expected results, but I'd like to figure out what's going on with this one.

index=testindex OR index=testindex2 source="insertpath" ErrorCodesResponse=PlanInvalid
| search TraceId=*
| stats values(TraceId) as TraceId
| mvexpand TraceId
| join type=inner TraceId [search index=test ("Test SKU") | fields TraceId,@t,@mt,RequestPath]
| eval date=strftime(strptime('@t', "%Y-%m-%dT%H:%M:%S.%6N%Z"), "%Y-%m-%d"), time=strftime(strptime('@t', "%Y-%m-%dT%H:%M:%S.%6N%Z"), "%H:%M")
| table time, date, TraceId, @MT,RequestPath
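One cause that fits "nothing changed but the results did": the subsearch inside join is capped (by default around 50,000 rows plus a runtime limit), and once the "Test SKU" data grows past the cap it gets silently truncated, so matching TraceIds start dropping out. A join-free sketch of the same correlation to test against (field names copied from the query above; TraceId is the shared key):

(index=testindex OR index=testindex2 source="insertpath" ErrorCodesResponse=PlanInvalid TraceId=*) OR (index=test "Test SKU")
| stats values(@t) as @t values(@mt) as @mt values(RequestPath) as RequestPath values(ErrorCodesResponse) as ErrorCodesResponse by TraceId
| where ErrorCodesResponse="PlanInvalid" AND isnotnull(RequestPath)

If the stats version returns the rows the join version misses, the subsearch limit is the likely culprit.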
Hi everyone, I am trying to create a multi-KPI alert. I have tens of services with 4-5 KPIs each. Using the multi-KPI alert, I want to create a correlation search that can send me an email alert if any of the KPIs are at critical severity for more than 15 minutes. After selecting "Status over time" in the multi-KPI creation window, we have to set a trigger for each of the KPIs. Is there a way to set the same trigger for all the KPIs? For example: if any KPI is at Critical severity level >= 50% of the last 30 minutes. It seems like I am missing something; surely I don't have to click and set a trigger for each KPI hundreds of times. Thanks!
How do I create a search job through a REST API?

The tool I have to use is Azure Data Factory to call a REST API.

I am doing a POST to the search endpoint with url="https://edp.splunkcloud.com:8089/services/search/v2/jobs?output_mode=json" and body="{\n \"search\": \"search%20index%3D\"oper_event_dynatrace_perf\" source=\"dynatrace_timeseries_metrics_v2://dynatrace_synthetic_browser_totalduration\"%20earliest%3D-96h}"

In the response to the POST, the API returns a scheduler SID that refers to a search that is not the one I put in the POST's search parameter. I checked Activity > Jobs in Splunk and no job was created for my search or for my user.

How can I build the POST search request so it creates a job for my search through the Splunk API?

Input:
{
"method": "POST",
"headers": {
"Content-Type": "application/json; charset=UTF-8"
},
"url": "https://edp.splunkcloud.com:8089/services/search/v2/jobs?output_mode=json",
"connectVia": {
"referenceName": "integrationRuntime1",
"type": "IntegrationRuntimeReference"
},
"body": "{\n \"search\": \"search%20index%3D\"oper_event_dynatrace_perf\" source=\"dynatrace_timeseries_metrics_v2://dynatrace_synthetic_browser_totalduration\"%20earliest%3D-96h}",
"authentication": {
"type": "Basic",
"username": "saazrITAnalytD01",
"password": {
"type": "SecureString",
"value": "***********"
}
}
}

Output:
{
"links": {},
"origin": "https://edp.splunkcloud.com:8089/services/search/v2/jobs",
"updated": "2024-11-21T16:04:41Z",
"generator": {
"build": "be317eb3f944",
"version": "9.2.2406.109"
},
"entry": [
{
"name": "search ```Verifique se algum dos modelos ...",
"id": "https://edp.splunkcloud.com:8089/services/search/v2/jobs/scheduler_dGlhZ28uZ29uY2FsdmVzQGJvcmRlci1pbm5vdmF0aW9uLmNvbQ_YWlvcHNfc3RvcmFnZV9wcm9qZWN0aW9u__RMD546f44b20564d9b63_at_1732179600_6116",
"updated": "2024-11-21T09:00:30.684Z",
"links": {
"alternate": "/services/search/v2/jobs/scheduler_dGlhZ28uZ29uY2FsdmVzQGJvcmRlci1pbm5vdmF0aW9uLmNvbQ_YWlvcHNfc3RvcmFnZV9wcm9qZWN0aW9u__RMD546f44b20564d9b63_at_1732179600_6116",
"search_telemetry.json": "/services/search/v2/jobs/scheduler_dGlhZ28uZ29uY2FsdmVzQGJvcmRlci1pbm5vdmF0aW9uLmNvbQ_YWlvcHNfc3RvcmFnZV9wcm9qZWN0aW9u__RMD546f44b20564d9b63_at_1732179600_6116/search_telemetry.json",
"search.log": "/services/search/v2/jobs/scheduler_dGlhZ28uZ29uY2FsdmVzQGJvcmRlci1pbm5vdmF0aW9uLmNvbQ_YWlvcHNfc3RvcmFnZV9wcm9qZWN0aW9u__RMD546f44b20564d9b63_at_1732179600_6116/search.log",
"events": "/services/search/v2/jobs/scheduler_dGlhZ28uZ29uY2FsdmVzQGJvcmRlci1pbm5vdmF0aW9uLmNvbQ_YWlvcHNfc3RvcmFnZV9wcm9qZWN0aW9u__RMD546f44b20564d9b63_at_1732179600_6116/events",
"results": "/services/search/v2/jobs/scheduler_dGlhZ28uZ29uY2FsdmVzQGJvcmRlci1pbm5vdmF0aW9uLmNvbQ_YWlvcHNfc3RvcmFnZV9wcm9qZWN0aW9u__RMD546f44b20564d9b63_at_1732179600_6116/results",
"results_preview": "/services/search/v2/jobs/scheduler_dGlhZ28uZ29uY2FsdmVzQGJvcmRlci1pbm5vdmF0aW9uLmNvbQ_YWlvcHNfc3RvcmFnZV9wcm9qZWN0aW9u__RMD546f44b20564d9b63_at_1732179600_6116/results_preview",
"timeline": "/services/search/v2/jobs/scheduler_dGlhZ28uZ29uY2FsdmVzQGJvcmRlci1pbm5vdmF0aW9uLmNvbQ_YWlvcHNfc3RvcmFnZV9wcm9qZWN0aW9u__RMD546f44b20564d9b63_at_1732179600_6116/timeline",
"summary": "/services/search/v2/jobs/scheduler_dGlhZ28uZ29uY2FsdmVzQGJvcmRlci1pbm5vdmF0aW9uLmNvbQ_YWlvcHNfc3RvcmFnZV9wcm9qZWN0aW9u__RMD546f44b20564d9b63_at_1732179600_6116/summary",
"control": "/services/search/v2/jobs/scheduler_dGlhZ28uZ29uY2FsdmVzQGJvcmRlci1pbm5vdmF0aW9uLmNvbQ_YWlvcHNfc3RvcmFnZV9wcm9qZWN0aW9u__RMD546f44b20564d9b63_at_1732179600_6116/control"
},
"published": "2024-11-21T09:00:27Z",
"author": "tiago.goncalves@border-innovation.com",
"content": {
"bundleVersion": "11289842698950824761",
"canSummarize": false,
"cursorTime": "1970-01-01T00:00:00Z",
"defaultSaveTTL": "604800",
"defaultTTL": "600",
"delegate": "scheduler",
"diskUsage": 593920,
"dispatchState": "DONE",
"doneProgress": 1,
"dropCount": 0,
"earliestTime": "2024-11-21T00:00:00Z",
"eventAvailableCount": 0,
"eventCount": 0,
"eventFieldCount": 0,
"eventIsStreaming": false,
"eventIsTruncated": false,
"eventSearch": "search (index=_internal ...",
"eventSorting": "none",
"isBatchModeSearch": true,
"isDone": true,
"isEventsPreviewEnabled": false,
"isFailed": false,
"isFinalized": false,
"isPaused": false,
"isPreviewEnabled": false,
"isRealTimeSearch": false,
"isRemoteTimeline": false,
"isSaved": false,
"isSavedSearch": true,
"isTimeCursored": true,
"isZombie": false,
"is_prjob": true,
"keywords": "app::aiops_storage_projection index::_internal result_count::0 \"savedsearch_name::edp aiops sp*\" search_type::scheduled source::*scheduler.log",
"label": "EDP AIOPS - Falha no treino dos modelos de previsão",
"latestTime": "2024-11-21T09:00:00Z",
"normalizedSearch": "litsearch (index=_internal ...",
"numPreviews": 0,
"optimizedSearch": "| search (index=_internal app=...",
"phase0": "litsearch (index=_internal ...",
"phase1": "addinfo type=count label...",
"pid": "3368900",
"priority": 5,
"provenance": "scheduler",
"remoteSearch": "litsearch (index=_internal ...",
"reportSearch": "table _time...",
"resultCount": 0,
"resultIsStreaming": false,
"resultPreviewCount": 0,
"runDuration": 3.304,
"sampleRatio": "1",
"sampleSeed": "0",
"savedSearchLabel": "{\"owner\":\"tiago.goncalves@border-innovation.com\",\"app\":\"aiops_storage_projection\",\"sharing\":\"app\"}",
"scanCount": 10,
"search": "search ```Verifique se ...",
"searchCanBeEventType": false,
"searchEarliestTime": 1732147200,
"searchLatestTime": 1732179600,
"searchTotalBucketsCount": 48,
"searchTotalEliminatedBucketsCount": 14,
"sid": "scheduler_dGlhZ28uZ29uY2FsdmVzQGJvcmRlci1pbm5vdmF0aW9uLmNvbQ_YWlvcHNfc3RvcmFnZV9wcm9qZWN0aW9u__RMD546f44b20564d9b63_at_1732179600_6116",
"statusBuckets": 0,
"ttl": 147349,
...
}
}
]
}
Hi Everyone, hope you all are doing well. I am trying to deploy the CIM in a search head cluster environment, and I have some questions:

1. I found two files under /default (inputs.conf & indexes.conf) that seem to me to be related to an indexer cluster rather than a search head cluster. Is that correct?
2. What does "the cim_modactions index definition is used with the common action model alerts and auditing" mean? I don't understand the actual meaning.

Splunk Common Information Model (CIM)