All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi all, I have a search that works for a range of a few days (e.g. earliest=-7d@d), but it breaks when run over all time. I suspect this is an issue with appendcols or streamstats? Any pointers would be appreciated. I'm using this to generate a lookup that I can then search instead of running an expensive all-time search.

index=ndx sourcetype=src (device="PM4") earliest=0 latest=@d
| bucket _time span=1d
| stats max(value) as PM4Val by _time index
| appendcols [ search index=ndx sourcetype=src (device="PM2") earliest=0 latest=@d | bucket _time span=1d | stats max(value) as PM2Val by _time index ]
| streamstats current=f last(PM4Val) as LastPM4Val last(PM2Val) as LastPM2Val by index
| eval PM4ValDelta = PM4Val - LastPM4Val, PM2ValDelta = PM2Val - LastPM2Val
| table _time, index, PM4Val, PM4ValDelta, PM2Val, PM2ValDelta
| sort index -_time
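For reference, here is the appendcols-free restructuring I am considering, folding both devices into a single search so the subsearch result rows can never drift out of alignment (an untested sketch using the same index, sourcetype, and field names as above):

index=ndx sourcetype=src (device="PM4" OR device="PM2") earliest=0 latest=@d
| bucket _time span=1d
| stats max(value) as maxVal by _time index device
| eval PM4Val=if(device="PM4", maxVal, null()), PM2Val=if(device="PM2", maxVal, null())
| stats max(PM4Val) as PM4Val max(PM2Val) as PM2Val by _time index
| streamstats current=f last(PM4Val) as LastPM4Val last(PM2Val) as LastPM2Val by index
| eval PM4ValDelta = PM4Val - LastPM4Val, PM2ValDelta = PM2Val - LastPM2Val
| table _time, index, PM4Val, PM4ValDelta, PM2Val, PM2ValDelta
| sort index -_time

This would also sidestep the subsearch result and runtime limits that appendcols runs into over very long ranges, which may be what breaks the all-time run.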
Hi all, I am trying to calculate the total duration each user spends connected through VPN, i.e. their total online time. I am using the search below, but the issue is that, for example, in a 24-hour range, if the user logged in for only 10 minutes at 1 AM and then again for 1 hour at 11 AM, the duration output spans the whole gap, because it takes the very first event and the very last event. How can I calculate based only on time slots that actually contain events?

index=pa src_zone="GP-VPN" src_user="*"
| stats earliest(_time) AS earliest latest(_time) AS latest BY src_user
| eval duration = tostring((latest-earliest)/60)

Timeline (screenshot omitted); the total should be ~14 hours.

Search results (duration in minutes), coming out at 24 hours, which is not correct due to the gap time:

user     earliest           latest             duration
user1    1719144008.192     1719230507.192     1441.6500
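One approach that may work (an untested sketch, assuming a connected user generates at least one event per minute): bucket the events into one-minute slots and count the distinct slots per user, so idle gaps contribute nothing.

index=pa src_zone="GP-VPN" src_user="*"
| bucket _time span=1m
| stats dc(_time) as active_minutes by src_user

A coarser span (e.g. 5m) trades precision for tolerance of sparser events.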
We have multiple forwarders sending data to an intermediate forwarder (IF), and that IF is sending data to the indexers. The IF is not storing any data in this case. If we enable compression on the IF, will it automatically apply to data coming from the UFs, or should we apply this config on all UFs as well?
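For reference, my understanding is that compression is negotiated per forwarding hop, so it needs to be enabled on every sending tier, with each receiver configured to accept it. A sketch, assuming plain (non-TLS) S2S on port 9997 and placeholder host names:

# outputs.conf on each UF, and on the IF for its onward hop
[tcpout:my_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
compressed = true

# inputs.conf on each receiver (the IF, and the indexers)
[splunktcp://9997]
compressed = true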
Hi team, I need to extract the highlighted fields from the message below using regex. I have tried Splunk's built-in field extraction, but it throws an error when I use multiple fields.

{ "eventTime": "2024-06-24T06:15:42Z", "leaduuid": "1234455", "CrmId": "11111111", "studentCrmUuid": "634543564", "externalId": "", "SiteId": "xxxx", "subCategory": "", "category": "Course Enquiry", "eventId": "", "eventRegistrationId": "", "status": "Open", "source": "Online Enquiry", "leadId": "22222222",  "assignmentStatusCode": "", "assignmentStatus": "", "isFirstLead": "yes", "c4cEventId": "", "channelPartnerApplication": "no", "applicationReceivedDate": "", "referredBy": "", "referrerCounsellor": "", "createdBy": "Technical User",  "lastChangedBy": "Technical User" , "leadSubAgentID": "", "cancelReason": ""}, "offersInPrinciple": {"offersinPrinciple": "no", "oipReferenceNumber": "", "oipVerificationStatus": ""}, "qualification": {"qualification": "Unqualified", "primaryFinancialSource": ""}, "online": {"referringUrl": "", "idpNearestOffice": "", "sourceSiteId": "xxxxx", "preferredCounsellingMode": "", "institutionInfo": "", "courseName": "", "howDidYouHear": "Social Media"}
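A sketch of one way with rex, using leaduuid and category from the sample as stand-ins for whichever fields are highlighted (one rex per field keeps each pattern simple):

<your base search>
| rex "\"leaduuid\":\s*\"(?<leaduuid>[^\"]*)\""
| rex "\"category\":\s*\"(?<category>[^\"]*)\""

Since the payload is JSON-like, spath (e.g. | spath path=category) may also work without any regex at all.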
Hi Team, we are setting up minimalistic dashboards for application logs. The application logs include local server logs, application logs, Tibco logs, and Kibana logs. Is there a standard dashboard setup available for application log monitoring? Please guide me on creating a single dashboard for application log monitoring.   Thanks,
Hello everyone, I am a newbie in this field and am looking forward to your help. I am using Eventgen to create data samples for Splunk Enterprise. I have a datamodel "Test", a dataset "datasetA" in that datamodel, "datasetB" inherited from "datasetA", and "datasetC" inherited from "datasetB". All the data samples satisfy the base search and constraints of all the datasets; that is, every sample belongs to all three datasets above. The problem is that there are values for datasetA.fieldname but not for datasetB.fieldname, even though datasetB is inherited from datasetA. Has anyone had the same problem? (More information: sorry, I cannot capture a screenshot.)

Example:

| tstats values(datasetA.action) from datamodel=Test
-> result: 3 actions

| tstats values(datasetA.datasetB.action) from datamodel=Test
-> result: no results found

The data samples in datasetA and datasetB are the same. Thank you for reading.
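One thing that might be worth trying (a guess on my part about how tstats scopes dataset lineage, not something I have confirmed): point the from clause at the child dataset directly instead of the whole datamodel:

| tstats values(datasetA.datasetB.action) from datamodel=Test.datasetB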
Hi all, I recently installed this add-on on my cluster (HFs, IDXs, SHs). I copied props.conf and transforms.conf into the local directory and uncommented the mappings to sourcetype elastic:auditbeat:log. But this had no effect, and I still see only one sourcetype: elastic:auditbeat:log. Any ideas are appreciated. Thanks.
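For illustration, a sourcetype-rewriting mapping usually has this shape (a sketch with hypothetical stanza names and regex, not the add-on's actual contents). Two things worth checking: index-time transforms only take effect on the first full Splunk instance that parses the data (the HFs here, not the SHs), and they require a restart of that instance.

# props.conf (local)
[elastic:auditbeat:log]
TRANSFORMS-set_sourcetype = elastic_auditbeat_rewrite

# transforms.conf (local)
[elastic_auditbeat_rewrite]
REGEX = some-pattern-identifying-the-event-type
FORMAT = sourcetype::elastic:auditbeat:someothertype
DEST_KEY = MetaData:Sourcetype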
I have a dashboard with multiple form inputs, which are used in multiple panels (not all shown here). I don't want any panel to run its search before the Submit button is clicked. But one panel, in which I have not used any of the user inputs or tokens, simply runs as soon as the dashboard loads. I am not sure if this is the expected behaviour, and whether I have to add token dependencies separately for that panel's search to stop it from auto-running. I would appreciate your suggestions.

<form version="1.1" theme="dark">
  <fieldset autoRun="false" submitButton="true">
    <input type="time" token="token_time" searchWhenChanged="false">
      <label>Time</label>
      <default>
        <earliest>-1d@d</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <chart>
        <title>Dashboard Title</title>
        <search>
          <query>Search Query</query>
          <earliest>-7d@d</earliest>
          <latest>now</latest>
        </search>
        <option name="charting.chart">line</option>
        <option name="charting.drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </chart>
    </panel>
  </row>
</form>
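As far as I understand, this is expected: a search that references no tokens has nothing to wait for, so it runs at load regardless of autoRun/submitButton. The usual workaround (a sketch) is to make the panel depend on the submitted time token, which is only populated after the first Submit:

<search>
  <query>Search Query</query>
  <earliest>$token_time.earliest$</earliest>
  <latest>$token_time.latest$</latest>
</search>

The tradeoff is that the panel then follows the time picker; if it must keep its fixed -7d@d window, the token can instead be referenced harmlessly inside the query text so the search still waits for Submit.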
Hello, I have a dashboard with a multiselect plus a text input field.

<form version="1.1" theme="light">
  <label>Multiselect Text</label>
  <init>
    <set token="toktext">*</set>
  </init>
  <fieldset submitButton="false">
    <input type="multiselect" token="tokselect">
      <label>Field</label>
      <choice value="category">Group</choice>
      <choice value="severity">Severity</choice>
      <default>category</default>
      <valueSuffix>=REPLACE</valueSuffix>
      <delimiter> OR </delimiter>
      <prefix>(</prefix>
      <suffix>)</suffix>
      <change>
        <eval token="tokfilter">replace($tokselect$,"REPLACE","\"".$toktext$."\"")</eval>
      </change>
    </input>
    <input type="text" token="toktext">
      <label>Value</label>
      <default>*</default>
      <change>
        <eval token="tokfilter">replace($tokselect$,"REPLACE","\"".$toktext$."\"")</eval>
      </change>
    </input>
  </fieldset>
  <row>
    <panel>
      <event>
        <title>$tokfilter$</title>
        <search>
          <query>| makeresults</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
      </event>
    </panel>
  </row>
</form>

Right now it works like this: if I choose something from 'Field' and add optional text to 'Value', the search becomes e.g. category="*" OR severity="*". I'd like to add a free-form option, where if the user chooses that option in 'Field' and enters something in 'Value', the search looks for the 'Value' on its own, like "*", i.e. not scoped to any field. Could you please help me? Thanks in advance!
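One idea I want to explore (an untested sketch; "ANYFIELD" is just a placeholder choice value I made up): add a dedicated choice for the free-form case and branch in the eval so the field prefix is dropped when it is selected:

<choice value="ANYFIELD">Free form</choice>
...
<change>
  <eval token="tokfilter">if(match($tokselect$, "ANYFIELD"), "\"".$toktext$."\"", replace($tokselect$, "REPLACE", "\"".$toktext$."\""))</eval>
</change>

The same eval would go in both inputs' change handlers; mixing the free-form choice with the field choices in one multiselect selection would need extra handling.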
I have seen some posts about rules for Splunk logs, but I cannot find an actual list of rules. My applications log a lot of lines for Splunk (100 GB/day), and we prefer to use the default integration in Splunk (without transformations, extractions, ...) in order to save time during indexing. I have proposed that my developers log with the constraints below. Where can I find the full set of constraints, or better constraints? Please log like this:

[%m-%d-%Y %H:%M:%S.%Q]key1=value1,key2=value2,...

keys: must not begin with a number or '_'
values: no spaces or commas, otherwise wrap them in quotes
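For concreteness, a made-up line that follows these constraints and that Splunk's automatic key=value extraction should pick up without any props or transforms:

[06-24-2024 10:15:32.123]app=billing,action=login,status="in progress",user_id=42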
Hi Splunkers, I need to know how to comment out a single line in an SPL query when working in Search & Reporting. Could someone please provide an example? Thanks,
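For what it's worth, recent Splunk versions (9.x) support inline comments wrapped in triple backticks; anything inside them is ignored by the parser, so wrapping a whole pipeline line comments it out. A small example (index and field names made up):

index=_internal log_level=ERROR
``` | stats count by component   this line is commented out ```
| head 10

On older versions, the common workaround was a user-defined comment macro.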
Hi Team, could someone please help me with the following? I have a requirement to display some events based on some search criteria, and I want to create a drilldown triggered by clicking on any of the legend entries, with the drilldown based on the clicked series.
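I am not certain legend labels themselves fire drilldown events in Simple XML; clicking a series' data points does, and the clicked series name is then available as $click.name2$. A sketch of what I mean ("selected_series" is a token name I made up; a hidden panel could then render based on it):

<chart>
  ...
  <drilldown>
    <set token="selected_series">$click.name2$</set>
  </drilldown>
</chart>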
Hello, have a nice day! I have followed the Distributed Search document, created a dashboard.xml file, and pushed it through the deployer, and I can find it in the search head's app (screenshot omitted). Hitting Edit properties causes an error (screenshot omitted). Also, I do not find the dashboard under the Dashboards tab to view it.
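In case it is a placement or permissions issue, this is the layout I would double-check (a sketch; <app> and the view name are placeholders). Dashboards pushed by the deployer need to sit under the app's UI views directory and be readable/exported:

# on the deployer, before running apply shcluster-bundle
$SPLUNK_HOME/etc/shcluster/apps/<app>/default/data/ui/views/dashboard.xml

# <app>/metadata/default.meta
[views/dashboard]
access = read : [ * ], write : [ admin ]
export = system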
Can anyone tell me the best practice for the splunkfwd user to access directories and logs owned by other users and root? Not interested in changing dir/log ownership. We could do ACLs, but that is a lot of work. Out of the box, what is the access level of the splunkfwd user post-install?
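For reference, out of the box splunkfwd is an ordinary unprivileged account, so it can only read what world permissions or its group memberships allow. If ACLs do end up being the answer, the per-directory work is roughly this (a sketch for one hypothetical log directory):

# read + traverse on what exists today
setfacl -R -m u:splunkfwd:rX /var/log/myapp
# default ACL so newly created files inherit read access
setfacl -R -d -m u:splunkfwd:rX /var/log/myapp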
Input event: [a lot more data exists on the same single line]

,"Comments": "New alert", "Data": "{\"etype\":\"MalwareFamily\",\"at\":\"2024-06-21T11:34:07.0000000Z\",\"md\":\"2024-06-21T11:34:07.0000000Z\",\"Investigations\":[{\"$id\":\"1\",\"Id\":\"urn:ZappedUrlInvestigation:2cc87ae3\",\"InvestigationStatus\":\"Running\"}],\"InvestigationIds\":[\"urn:ZappedUrlInvestigation:2cc8782d063\"],\"Intent\":\"Probing\",\"ResourceIdentifiers\":[{\"$id\":\"2\",\"AadTenantId\":\"2dfb29-729c918\",\"Type\":\"AAD\"}],\"AzureResourceId\":null,\"WorkspaceId\":null,\"Metadata\":{\"CustomApps\":null,\"GenericInfo\":null},\"Entities\":[{\"$id\":\"3\",\"MailboxPrimaryAddress\":\"abc@gmail.com\",\"Upn\":\"abc@gmail.com\",\"AadId\":\"6eac3b76357\",\"RiskLevel\":\"None\",\"Type\":\"mailbox\",\"Urn\":\"urn:UserEntity:10338af2b6c\",\"Source\":\"TP\",\"FirstSeen\":\"0001-01-01T00:00:00\"}, \"StartTimeUtc\": \"2024-06-21T10:12:37\", \"Status\": \"Investigation Started\"}","EntityType": "MalwareFamily", [a lot more data exists on the same single line]

There is a lot of data on a single line, and I want to substitute \" with " only where it falls inside the Data dictionary value, nothing before and nothing after. Sample regex: https://regex101.com/r/Gsfaay/1 (only the highlighted data in group 4 should be modified). Also, the dictionary value is enclosed in quotes (as a string); I want those quotes replaced by [] braces so it becomes a list (groups 3 and 6).

Output required: [a lot more data exists on the same single line],"Comments": "New alert", "Data": [{"etype":"MalwareFamily", and so on, "Status":"Investigation Started"}],"EntityType": "MalwareFamily", [a lot more data exists on the same single line]

Trials:

[testing_logs]
SEDCMD-DataJson = s/\\\"/\"/g s/"Data": "{"/"Data": \[{"/g s/("Data": \[{".*})",/$1],/g
INDEXED_EXTRACTIONS = json
KV_MODE = json

I tried it in multiple steps, as in the regexes below, but in Splunk SEDCMD works on the entire _raw value, and I should not apply the substitution globally:

1. regex101.com/r/0g2bcL/1
2. regex101.com/r/o3eFgJ/1
3. regex101.com/r/D7Of0v/1

The only issue is with the first regex: it should not be applied globally to the entire event value, only within the Data dictionary value.
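A possible simplification worth checking (a sketch, resting on an assumption): if, for this feed, escaped quotes (\") only ever occur inside the "Data" string, then a global unescape is effectively already scoped, and the remaining two edits just swap the string delimiters for brackets. Split into separate SEDCMD classes (reportedly applied in alphabetical order of class name, hence the a_/b_/c_ prefixes; worth verifying):

[testing_logs]
# assumes \" never appears outside the Data value in this feed
SEDCMD-a_unescape = s/\\"/"/g
SEDCMD-b_open = s/"Data": "\{/"Data": [{/
SEDCMD-c_close = s/\}"(,\s*"EntityType")/}]\1/

If \" can appear elsewhere in the event, SEDCMD alone may not be able to scope it, and an ingest-time eval or scripted transform might be needed instead.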
I want to set up an alert in Splunk that sends a message to two different public Slack channels. Currently, sending a message to one channel works fine, but I'm having trouble sending messages to multiple channels. Here is how I'm configuring it for a single Slack channel (screenshot omitted). Is there a way to fix this through this UI?
Hi, I have been getting this error since the first of June. Here is my splunkd.log:

06-22-2024 14:54:00.405 +0800 ERROR ExecProcessor [201562 ExecProcessor] - message from "/opt/splunk/etc/apps/splunk_app_db_connect/linux_x86_64/bin/dbxquery.sh" action=dbxquery_server_start_failed error=com.splunk.HttpException: HTTP 503 -- KV Store initialization failed. Please contact your system administrator. stack=com.splunk.HttpException.create(HttpException.java:84)\\com.splunk.DBXService.sendImpl(DBXService.java:132)\\com.splunk.DBXService.send(DBXService.java:44)\\com.splunk.HttpService.get(HttpService.java:172)\\com.splunk.dbx.model.repository.SecretKVStoreRepository.getSecrets(SecretKVStoreRepository.java:41)\\com.splunk.dbx.utils.SecurityFileGenerationUtil.getSecretsFromKvStore(SecurityFileGenerationUtil.java:261)\\com.splunk.dbx.utils.SecurityFileGenerationUtil.initEncryption(SecurityFileGenerationUtil.java:51)\\com.splunk.dbx.command.DbxQueryServerStart.startDbxQueryServer(DbxQueryServerStart.java:82)\\com.splunk.dbx.command.DbxQueryServerStart.streamEvents(DbxQueryServerStart.java:50)\\com.splunk.modularinput.Script.run(Script.java:66)\\com.splunk.modularinput.Script.run(Script.java:44)\\com.splunk.dbx.command.DbxQueryServerStart.main(DbxQueryServerStart.java:95)\\

06-22-2024 14:54:00.406 +0800 WARN ExecProcessor [201562 ExecProcessor] - message from "/opt/splunk/etc/apps/splunk_app_db_connect/linux_x86_64/bin/dbxquery.sh" com.splunk.modularinput.MalformedDataException: Events must have at least the data field set to be written to XML.

06-22-2024 14:54:00.406 +0800 ERROR ExecProcessor [201562 ExecProcessor] - message from "/opt/splunk/etc/apps/splunk_app_db_connect/linux_x86_64/bin/dbxquery.sh" com.splunk.modularinput.Event.writeTo(Event.java:65)\\com.splunk.modularinput.EventWriter.writeEvent(EventWriter.java:137)\\com.splunk.dbx.command.DbxQueryServerStart.streamEvents(DbxQueryServerStart.java:51)\\com.splunk.modularinput.Script.run(Script.java:66)\\com.splunk.modularinput.Script.run(Script.java:44)\\com.splunk.dbx.command.DbxQueryServerStart.main(DbxQueryServerStart.java:95)\\

06-22-2024 14:54:04.800 +0800 INFO ExecProcessor [201562 ExecProcessor] - message from "/opt/splunk/etc/apps/splunk_app_db_connect/linux_x86_64/bin/server.sh" action=start_task_server, configFile=/opt/splunk/etc/apps/splunk_app_db_connect/config/dbx_task_server.yml

06-22-2024 14:54:04.842 +0800 INFO ExecProcessor [201562 ExecProcessor] - message from "/opt/splunk/etc/apps/splunk_app_db_connect/linux_x86_64/bin/dbxquery.sh" action=start_dbxquery_server, configFile=/opt/splunk/etc/apps/splunk_app_db_connect/config/dbxquery_server.yml

06-22-2024 14:54:04.981 +0800 ERROR ExecProcessor [201562 ExecProcessor] - message from "/opt/splunk/etc/apps/splunk_app_db_connect/linux_x86_64/bin/server.sh" 14:54:04.980 [main] INFO com.splunk.dbx.utils.SecurityFileGenerationUtil - initializing secret kv store collection

06-22-2024 14:54:05.015 +0800 ERROR ExecProcessor [201562 ExecProcessor] - message from "/opt/splunk/etc/apps/splunk_app_db_connect/linux_x86_64/bin/dbxquery.sh" 14:54:05.013 [main] INFO com.splunk.dbx.utils.SecurityFileGenerationUtil - initializing secret kv store collection

06-22-2024 14:54:05.102 +0800 ERROR ExecProcessor [201562 ExecProcessor] - message from "/opt/splunk/etc/apps/splunk_app_db_connect/linux_x86_64/bin/server.sh" 14:54:05.101 [main] INFO com.splunk.dbx.utils.SecurityFileGenerationUtil - secret KV Store found, store=com.splunk.Entity@d7b1517

06-22-2024 14:54:05.129 +0800 ERROR ExecProcessor [201562 ExecProcessor] - message from "/opt/splunk/etc/apps/splunk_app_db_connect/linux_x86_64/bin/dbxquery.sh" 14:54:05.129 [main] INFO com.splunk.dbx.utils.SecurityFileGenerationUtil - secret KV Store found, store=com.splunk.Entity@d7b1517

06-22-2024 14:54:05.214 +0800 ERROR ExecProcessor [201562 ExecProcessor] - message from "/opt/splunk/etc/apps/splunk_app_db_connect/linux_x86_64/bin/server.sh" action=task_server_start_failed error=com.splunk.HttpException: HTTP 503 -- KV Store initialization failed. Please contact your system administrator. stack=com.splunk.HttpException.create(HttpException.java:84)\\com.splunk.DBXService.sendImpl(DBXService.java:132)\\com.splunk.DBXService.send(DBXService.java:44)\\com.splunk.HttpService.get(HttpService.java:172)\\com.splunk.dbx.model.repository.SecretKVStoreRepository.getSecrets(SecretKVStoreRepository.java:41)\\com.splunk.dbx.utils.SecurityFileGenerationUtil.getSecretsFromKvStore(SecurityFileGenerationUtil.java:261)\\com.splunk.dbx.utils.SecurityFileGenerationUtil.initEncryption(SecurityFileGenerationUtil.java:51)\\com.splunk.dbx.server.bootstrap.TaskServerStart.startTaskServer(TaskServerStart.java:108)\\com.splunk.dbx.server.bootstrap.TaskServerStart.streamEvents(TaskServerStart.java:69)\\com.splunk.modularinput.Script.run(Script.java:66)\\com.splunk.modularinput.Script.run(Script.java:44)\\com.splunk.dbx.server.bootstrap.TaskServerStart.main(TaskServerStart.java:145)\\

06-22-2024 14:54:05.215 +0800 WARN ExecProcessor [201562 ExecProcessor] - message from "/opt/splunk/etc/apps/splunk_app_db_connect/linux_x86_64/bin/server.sh" com.splunk.modularinput.MalformedDataException: Events must have at least the data field set to be written to XML.

06-22-2024 14:54:05.215 +0800 ERROR ExecProcessor [201562 ExecProcessor] - message from "/opt/splunk/etc/apps/splunk_app_db_connect/linux_x86_64/bin/server.sh" com.splunk.modularinput.Event.writeTo(Event.java:65)\\com.splunk.modularinput.EventWriter.writeEvent(EventWriter.java:137)\\com.splunk.dbx.server.bootstrap.TaskServerStart.streamEvents(TaskServerStart.java:74)\\com.splunk.modularinput.Script.run(Script.java:66)\\com.splunk.modularinput.Script.run(Script.java:44)\\com.splunk.dbx.server.bootstrap.TaskServerStart.main(TaskServerStart.java:145)\\

06-22-2024 14:54:05.233 +0800 ERROR ExecProcessor [201562 ExecProcessor] - message from "/opt/splunk/etc/apps/splunk_app_db_connect/linux_x86_64/bin/dbxquery.sh" action=dbxquery_server_start_failed error=com.splunk.HttpException: HTTP 503 -- KV Store initialization failed. Please contact your system administrator. stack=com.splunk.HttpException.create(HttpException.java:84)\\com.splunk.DBXService.sendImpl(DBXService.java:132)\\com.splunk.DBXService.send(DBXService.java:44)\\com.splunk.HttpService.get(HttpService.java:172)\\com.splunk.dbx.model.repository.SecretKVStoreRepository.getSecrets(SecretKVStoreRepository.java:41)\\com.splunk.dbx.utils.SecurityFileGenerationUtil.getSecretsFromKvStore(SecurityFileGenerationUtil.java:261)\\com.splunk.dbx.utils.SecurityFileGenerationUtil.initEncryption(SecurityFileGenerationUtil.java:51)\\com.splunk.dbx.command.DbxQueryServerStart.startDbxQueryServer(DbxQueryServerStart.java:82)\\com.splunk.dbx.command.DbxQueryServerStart.streamEvents(DbxQueryServerStart.java:50)\\com.splunk.modularinput.Script.run(Script.java:66)\\com.splunk.modularinput.Script.run(Script.java:44)\\com.splunk.dbx.command.DbxQueryServerStart.main(DbxQueryServerStart.java:95)\\

06-22-2024 14:54:05.233 +0800 WARN ExecProcessor [201562 ExecProcessor] - message from "/opt/splunk/etc/apps/splunk_app_db_connect/linux_x86_64/bin/dbxquery.sh" com.splunk.modularinput.MalformedDataException: Events must have at least the data field set to be written to XML.

06-22-2024 14:54:05.233 +0800 ERROR ExecProcessor [201562 ExecProcessor] - message from "/opt/splunk/etc/apps/splunk_app_db_connect/linux_x86_64/bin/dbxquery.sh" com.splunk.modularinput.Event.writeTo(Event.java:65)\\com.splunk.modularinput.EventWriter.writeEvent(EventWriter.java:137)\\com.splunk.dbx.command.DbxQueryServerStart.streamEvents(DbxQueryServerStart.java:51)\\com.splunk.modularinput.Script.run(Script.java:66)\\com.splunk.modularinput.Script.run(Script.java:44)\\com.splunk.dbx.command.DbxQueryServerStart.main(DbxQueryServerStart.java:95)\\

And here is the mongod.log:

2024-06-19T15:46:17.512+0800 W CONTROL [main] Option: sslMode is deprecated. Please use tlsMode instead.
2024-06-19T15:46:17.512+0800 W CONTROL [main] Option: sslPEMKeyFile is deprecated. Please use tlsCertificateKeyFile instead.
2024-06-19T15:46:17.512+0800 W CONTROL [main] Option: sslPEMKeyPassword is deprecated. Please use tlsCertificateKeyFilePassword instead.
2024-06-19T15:46:17.512+0800 W CONTROL [main] Option: sslCipherConfig is deprecated. Please use tlsCipherConfig instead.
2024-06-19T15:46:17.512+0800 W CONTROL [main] Option: sslAllowInvalidHostnames is deprecated. Please use tlsAllowInvalidHostnames instead.
2024-06-19T07:46:17.513Z W CONTROL [main] net.tls.tlsCipherConfig is deprecated. It will be removed in a future release.
2024-06-19T07:46:17.522Z W NETWORK [main] Server certificate has no compatible Subject Alternative Name. This may prevent TLS clients from connecting
2024-06-19T07:46:17.524Z W ASIO [main] No TransportLayer configured during NetworkInterface startup
2024-06-19T07:46:17.527Z I ACCESS [main] permissions on /opt/splunk/var/lib/splunk/kvstore/mongo/splunk.key are too open

I tried creating a new SSL certificate, but it didn't work. I also tried changing the permissions on /opt/splunk/var/lib/splunk/kvstore/mongo/splunk.key but am still encountering the same error. What should I do? Please help.
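For completeness, here is exactly how I tightened the key file permissions when I tried (so you can tell me if I did it wrong); my understanding is that mongod wants the key readable only by the user running Splunk (assumed here to be splunk):

chmod 600 /opt/splunk/var/lib/splunk/kvstore/mongo/splunk.key
chown splunk:splunk /opt/splunk/var/lib/splunk/kvstore/mongo/splunk.key
/opt/splunk/bin/splunk restart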
What happens if indexer acknowledgment is enabled and there is multisite clustering, or the minimum RF is not met due to an indexer failure? If I understand it right, no acknowledgement will be sent until the data has been replicated per the RF value, and in such a scenario the UF will hold the data block in its queue. Would ingestion stall when the queue is full (its default size is only a few MB), even though ingestion is working on another site? If so, ingestion would completely stop even when only replication is impacted and ingestion itself is working fine. I am looking at the journeys below:

UF --> Intermediate Heavy Forwarder --> Indexer
PCF microservices --> Intermediate Heavy Forwarder --> Indexer
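For reference, the settings in play on each sending tier look roughly like this (a sketch with placeholder hosts; enlarging the output queue only buys time during a replication stall, it does not prevent one):

# outputs.conf on the UF and on the intermediate HF
[tcpout:my_indexers]
server = idx-site1:9997, idx-site2:9997
useACK = true
# default output queue is small; raise it to buffer through brief stalls
maxQueueSize = 128MB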
Hi, I hope all is well. I want to ask for more information and a simple explanation: I came across the Distributed Search document and came away with two questions:

What are search artifacts? They are the search results and should be replicated, but what exactly do they contain?
What is the knowledge bundle? A set of configurations transferred directly from the search head to the search peers, but why?

Thanks in advance!
Hoping to find a solution here for my rex query (I'm new to rex). I have an event that looks like this:

time="2024-06-22T00:31:43.939620127Z" level=info msg="uuid="KENT-12345678-1234-1234-1234-123456789123", tkt=INC123456789, ci=SAP, state=Escalate, opened=2024-06-22 00:31:06, closed=0001-01-01 00:00:00 +0000 UTC, title=server123.corp: userTrap: 1 ABC Job: 927370523:ABC_0001_DEF_XX_GHIJK_XXXX2_MOSU:Killed"

How do I write a query that will extract the string "server123.corp: userTrap: 1 ABC Job: 927370523:ABC_0001_DEF_XX_GHIJK_XXXX2_MOSU:Killed"?

Thank you very much
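A sketch of one way, assuming the target always starts at title= and runs to the closing quote of msg, with no quotes of its own ("alert_title" is just a field name I made up):

<your base search>
| rex "title=(?<alert_title>[^\"]+)\""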