All Topics
Good afternoon. I'm asking for help with changing the font color of the labels in a pie chart, on a panel I created in Splunk Dashboard Studio. Is there a way to change the label font color? I've been looking through the documentation and found a page listing all the source options available for a pie chart; one option in particular is called seriesColors. I'm still fairly new to Splunk, so I don't have much experience editing pie charts. Thank you.
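For reference, a minimal sketch of where a seriesColors option sits in a Dashboard Studio panel's source JSON (the type name and hex values are examples to check against your version's docs). Note that seriesColors sets the slice colors; the label text color may not be separately exposed, depending on the Splunk version.

{
    "type": "splunk.pie",
    "options": {
        "seriesColors": ["#1E93C6", "#F2B827", "#D6563C"]
    }
}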
I'm trying to understand API usage in our controllers - internal vs. public, basic auth vs. token-based - so they are appropriately sized and there are no performance bottlenecks. How do I get these stats? I want to filter the internal API volume out from the public volume and explore the possibility of moving some of these APIs to APIM. Also, is it possible to move all the external/public APIs to a different port to manage them better?
Scenario: I have a search head and two indexers in a cluster, with an index (index_a) defined in the cluster. Until now I have always deployed a copy of indexes.conf with a mock index definition on the SH, for example to manage role permissions for it. This was helpful for showing the index in the role definition. However, in this deployment there is no indexes.conf on the SH where index_a is defined, yet the index still shows up in the configuration UI. All instances run Splunk Enterprise 9.0.5.1.

Problem: I have a new index, index_b, that I defined after index_a. For some reason index_b doesn't show up in the role definitions.

What I tried: I looked up the name of index_a in the search head's config files. The only appearance is in system/local/authorize.conf. I also compared the index definitions on the CM, including file permission settings; the two configurations differ only in index name and app. I also set up a test environment with one indexer and one search head, created an index on the indexer, and it appeared in the SH role definition some time later without me configuring anything. Again I verified whether the index name appears anywhere in the SH's configs, but it didn't.

Question: Is there a new feature that makes the mock definitions on the SH obsolete? I am aware that I can solve this with the mock-index approach, but it seems nicer to do it the way it already works for index_a.
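For context, a minimal sketch of the mock-definition approach described above (standard default paths; no data is actually written on the SH, the stanza only makes the index name known locally):

# indexes.conf deployed to the search head
[index_b]
homePath   = $SPLUNK_DB/index_b/db
coldPath   = $SPLUNK_DB/index_b/colddb
thawedPath = $SPLUNK_DB/index_b/thaweddb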
Hello, I have the following issue - do you know a solution or workaround? (Or maybe I declared something incorrectly...) When using comma-separated field values with the IN operator inside MAP, the token from the outer search does not work. But when I write out the value of that outer field by hand, it is recognized.

| makeresults
| eval ips="a,c,x"
| map [
    | makeresults
    | append [| makeresults | eval ips="a", label="aaa"]
    | append [| makeresults | eval ips="b", label="bbb"]
    | append [| makeresults | eval ips="c", label="ccc"]
    | append [| makeresults | eval ips="d", label="ddd"]
    ```| search ips IN ($ips$)``` ```NOT WORKING```
    | search ips IN (a,c,x) ```WORKING```
    | eval outer_ips=$ips$
    ] maxsearches=10
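A possible workaround (an untested sketch): since $ips$ is substituted into the subsearch as literal text, wrap it in quotes to make it a string, split it on commas, and match with mvfind() instead of IN:

| makeresults
| eval ips="a,c,x"
| map [
    | makeresults
    | append [| makeresults | eval ips="a", label="aaa"]
    | append [| makeresults | eval ips="c", label="ccc"]
    | eval outer="$ips$"
    ```keep rows whose ips value appears in the outer comma-separated list```
    | where isnotnull(mvfind(split(outer, ","), "^" . ips . "$"))
    ] maxsearches=10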
Hello, I am trying to integrate ChatGPT with my dashboard, and I am using the OpenAI add-on (TA-openai-api). I am getting the following error:

"HTTP 404 Not Found -- Could not find object id=TA-openai-api:org_id_default: ERROR cannot unpack non-iterable NoneType object"

Can anyone help me with this?
Hi Guys,

In Splunk, a field named "event_sub_type" has multiple values. We don't want to ingest any logs into Splunk whose "event_sub_type" value is either "WAN Firewall" or "TLS" (as marked in the attached screenshot), as these are huge, unwanted logs.

Our search query is: index=cato sourcetype=cato_source

We tried multiple ways of editing props.conf and transforms.conf to exclude these logs, as shown below, but none of them succeeded:

props.conf
[sourcetype::cato_source]
TRANSFORMS-filter_logs = cloudparsing

transforms.conf
[cloudparsing]
REGEX = \"event_sub_type\":\"(WAN Firewall|TLS)\"
DEST_KEY = queue
FORMAT = nullQueue

Can someone please guide us on how to exclude events whose "event_sub_type" value is either "WAN Firewall" or "TLS" by editing props.conf and transforms.conf?

Raw events for reference that need to be excluded:

1. "event_sub_type":"WAN Firewall"

{"event_count":1,"ISP_name":"Shanghai internet","rule":"Initial Connectivity Rule","dest_is_site_or_vpn":"Site","src_isp_ip":"0.0.0.0","time_str":"2023-11-28T04:27:40Z","src_site":"CHINA-AZURE-E2","src_ip":"0.0.0.1","internalId":"54464646","dest_site_name":"china_112,"event_type":"Security","src_country_code":"CN","action":"Monitor","subnet_name":"cn-001.net-vnet-1","pop_name":"Shanghai_1","dest_port":443,"dest_site":"china_connect","rule_name":"Initial Connectivity Rule","event_sub_type":"WAN Firewall","insertionDate":1701188916690,"ip_protocol":"TCP","rule_id":"101238","src_is_site_or_vpn":"Site","account_id":5555,"application":"HTTP(S)","src_site_name":"china_connect","src_country":"China","dest_ip":"0.0.0.0","os_type":"OS_ANDROID","app_stack""TCP","TLS","HTTP(S)"],"time":1701188860834}

2. "event_sub_type":"TLS"

{"event_count":4,"http_host_name":"isp.vpn","ISP_name":"China_internet","src_isp_ip":"0.0.0.0","tls_version":"TLSv1.3","time_str":"2023-11-28T04:27:16Z","src_site":"china_mtt","src_ip":"0.0.0.0","internalId":"rtrgrtr","domain_name":"china.gh.com","event_type":"Security","src_country_code":"CN","tls_error_description":"unknown CA","action":"Alert","subnet_name":"0.0.0.0/24","pop_name":"china_1","dest_port":443,"event_sub_type":"TLS","insertionDate":1701188915580,"dest_country_code":"SG","tls_error_type":"fatal","dns_name":"china.com","traffic_direction":"OUTBOUND","src_is_site_or_vpn":"Site","account_id":56565,"application":"Netskope","src_site_name":"CHINA-44","src_country":"China","dest_ip":"0.0.0.0","os_type":"OS_WINDOWS","time":1701188836011,"dest_country":"Singapore"}
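A hedged sketch of a corrected version of the same filter, assuming the files are deployed where the data is first parsed (indexers or a heavy forwarder, not a universal forwarder). The key detail: in props.conf a sourcetype stanza is just the sourcetype name; only source:: and host:: stanzas take a prefix. The backslash-escaped quotes in the regex are harmless but unnecessary:

# props.conf
[cato_source]
TRANSFORMS-filter_logs = cloudparsing

# transforms.conf
[cloudparsing]
REGEX = "event_sub_type":"(WAN Firewall|TLS)"
DEST_KEY = queue
FORMAT = nullQueue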
I have a log like the one below displayed in the Splunk UI. I want the "message" key to be parsed as JSON as well. How can I do that? Below is the raw text.

{"stream":"stderr","logtag":"F","message":"{\"Context\":{\"SourceTransactionID\":\"UMV-626036c8-b843-46e8-8ef3-0bd78376bf93\",\"CaseID\":\"UMV-UMV_OK_CAAS_MMR_Mokcup_PIPE_2023-11-28-151036894\",\"CommunicationID\":\"UMV-64b9c2a9-be74-4ec6-9fd0-f545c1dd890f\",\"RequestID\":\"4ea2b9be-752b-4e6f-8972-0c435d1ad282\",\"RecordID\":\"332ebe12-0269-4ae6-90fc-98c8887e3703\"},\"LogCollection\":[{\"source\":\"handler.go:44\",\"timestamp\":\"2023-11-30T15:01:07.209285695Z\",\"msg\":{\"specversion\":\"1.0\",\"type\":\"com.cnc.caas.documentgenerationservices.documentgeneration.completed.public\",\"source\":\"/events/caas/documentgenerationservices/record/documentgeneration\",\"id\":\"Rec#332ebe12-0269-4ae6-90fc-98c8887e3703\",\"time\":\"2023-11-30T15:01:06.972071059Z\",\"subject\":\"record-documentgenerationservices-wip\",\"dataschema\":\"/caas/comp_01_a_events-spec.json\",\"datacontenttype\":\"application/json\",\"data\":{\"CAAS\":{\"Event\":{\"Version\":\"2.0.0\",\"EventType\":\"documentgeneration.completed\",\"LifeCycleStatus\":\"wip\",\"EventSequence\":4,\"OriginTimeStamp\":\"2023-11-30T15:01:06.972Z\",\"SourceName\":\"UMV\",\"SourceTransactionID\":\"UMV-626036c8-b843-46e8-8ef3-0bd78376bf93\",\"CaseID\":\"UMV-UMV_OK_CAAS_MMR_Mokcup_PIPE_2023-11-28-151036894\",\"CommunicationID\":\"UMV-64b9c2a9-be74-4ec6-9fd0-f545c1dd890f\",\"RequestID\":\"4ea2b9be-752b-4e6f-8972-0c435d1ad282\",\"RecordID\":\"332ebe12-0269-4ae6-90fc-98c8887e3703\",\"RequestedDeliveryChannel\":\"Print\",\"RecordedDeliveryChannel\":\"Print\",\"AdditionalData\":{\"CompositionAttributes\":{\"IsOCOENotificationRequired\":true,\"JobID\":47130}},\"S3Location\":{\"BucketName\":\"cnc-caas-csl-dev-smartcomm-output\",\"ObjectKey\":\"output/4ea2b9be-752b-4e6f-8972-0c435d1ad282/47130/4ea2b9be-752b-4e6f-8972-0c435d1ad282_332ebe12-0269-4ae6-90fc-98c8887e3703_UMV-64b9c2a9-be74-4ec6-9fd0-f545c1dd890f_Payload.json\"},\"Priority\":false,\"EventFailedStatus\":0,\"RetryCount\":1,\"Errors\":null,\"OriginalSqsMessage\":{\"data\":{\"CAAS\":{\"Event\":{\"AdditionalData\":{\"CompositionAttributes\":{\"IsOCOENotificationRequired\":true,\"JobID\":47130}},\"CaseID\":\"UMV-UMV_OK_CAAS_MMR_Mokcup_PIPE_2023-11-28-151036894\",\"CommunicationGroupID\":\"mbrmatreqok\",\"CommunicationID\":\"UMV-64b9c2a9-be74-4ec6-9fd0-f545c1dd890f\",\"Errors\":null,\"EventFailedStatus\":0,\"EventSequence\":4,\"EventType\":\"recordcomposition.response.start\",\"LifeCycleStatus\":\"wip\",\"OriginTimeStamp\":\"2023-11-30T15:00:04.996Z\",\"PreRendered\":false,\"Priority\":false,\"RecipientID\":\"68032561\",\"RecipientType\":\"Member\",\"RecordID\":\"332ebe12-0269-4ae6-90fc-98c8887e3703\",\"RecordedDeliveryChannel\":\"Print\",\"RequestID\":\"4ea2b9be-752b-4e6f-8972-0c435d1ad282\",\"RequestedDeliveryChannel\":\"Print\",\"RetryCount\":1,\"S3Location\":{\"BucketName\":\"cnc-caas-csl-dev-smartcomm-output\",\"ObjectKey\":\"output/4ea2b9be-752b-4e6f-8972-0c435d1ad282/47130/4ea2b9be-752b-4e6f-8972-0c435d1ad282_332ebe12-0269-4ae6-90fc-98c8887e3703_UMV-64b9c2a9-be74-4ec6-9fd0-f545c1dd890f_Payload.json\"},\"SourceName\":\"UMV\",\"SourceTransactionID\":\"UMV-626036c8-b843-46e8-8ef3-0bd78376bf93\",\"Version\":\"2.0.0\"}}},\"datacontenttype\":\"application/json\",\"dataschema\":\"/caas/comp_01_a_events-spec.json\",\"id\":\"Rec#332ebe12-0269-4ae6-90fc-98c8887e3703\",\"source\":\"/events/caas/smart/record/composition\",\"specversion\":\"1.0\",\"subject\":
\"record-composition-response-start\",\"time\":\"2023-11-30T15:01:05.756937686Z\",\"type\":\"com.cnc.caas.composition.response.start.private\"},\"CommunicationGroupID\":\"mbrmatreqok\",\"RecipientID\":\"68032561\",\"RecipientType\":\"Member\",\"PreRendered\":false}}}}},{\"source\":\"handler.go:46\",\"timestamp\":\"2023-11-30T15:01:07.21572506Z\",\"msg\":\"mongo insert id is 6568a3b3ab042d54478ef071\"}],\"RetryCount\":1,\"level\":\"error\",\"msg\":\"Log collector output\",\"time\":\"2023-11-30T15:01:07Z\"}","kubernetes":{"pod_name":"eventsupdatetomongo-d98bb8594-cnbsd","namespace_name":"caas-composition-layer","pod_id":"50d49842-793a-41c8-a903-11c23607dfd6","labels":{"app":"eventsupdatetomongo","pod-template-hash":"d98bb8594","version":"dcode-801-1.0.1-2745653"},"annotations":{"cattle.io/timestamp":"2023-06-08T22:30:33Z","cni.projectcalico.org/containerID":"58cf3b42ab43fac0a5bf1f97e5a4a7db9dbf6a572705f02480384e63c2a53288","cni.projectcalico.org/podIP":"172.17.224.31/32","cni.projectcalico.org/podIPs":"172.17.224.31/32","kubectl.kubernetes.io/restartedAt":"2023-11-20T17:28:31Z"},"host":"ip-10-168-125-122.ec2.internal","container_name":"eventsupdatetomongo","docker_id":"c83dd87422fbdcae60a40ac50bcad0f387d50f3021975b81dbccac1bc0d965b2","container_hash":"artifactory-aws.centene.com/caas-docker_non-production_local_aws/eventsupdatetomongo@sha256:3b7e5e0908cec3f68baa7f9be18397b6ce4aa807f92b98b6b8970edac9780388","container_image":"artifactory-aws.centene.com/caas-docker_non-production_local_aws/eventsupdatetomongo:dcode-801-1.0.1-2745653"}}      
Hi, I'm new to Splunk and want to change the time zone of my Splunk Cloud deployment. Right now the Cloud Monitoring app shows UTC; I want Mountain Time (Denver). So far my understanding is to change the time zone under Splunk > Preferences > Time zone. I wanted to make sure that's the right approach, since I don't want to tweak the whole deployment. Thank you!! @isoutamo @gcusello
I am trying to build my own KV store geo data. So far I can run:

| inputlookup geobeta
| where endIPNum >= 1317914622 AND startIPNum <= 1317914622
| table latitude, longitude

That returns:

latitude,longitude
"51.5128","-0.0638"

But how do I combine this with a search? I was trying this, but it doesn't work:

| makeresults
| eval ip_address_integer=1317914622
    [ | inputlookup geobeta
      | where endIPNum >= ip_address_integer AND startIPNum <= ip_address_integer ]

Many thanks for any hints.
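One possible pattern (a sketch, untested): a subsearch cannot see fields from the outer search, but map substitutes outer field values as $...$ tokens and runs one subsearch per result row:

| makeresults
| eval ip_address_integer=1317914622
| map maxsearches=10 search="| inputlookup geobeta
    | where endIPNum >= $ip_address_integer$ AND startIPNum <= $ip_address_integer$
    | table latitude, longitude"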
Hi All,

I've created the logic below to decode base64. Other discussions on this topic give possible solutions, but they only work when the encoded value is small, because of the use of list() in their stats command. My logic looks like this:

| eval time=_time
| appendpipe [
    | eval converts=split(encoded,"")
    | mvexpand converts
    | lookup base64conversion.csv index as converts OUTPUT value as base64bin
    | table encoded, base64bin, time
    | mvcombine base64bin
    | eval combined=mvjoin(base64bin,"")
    | rex field=combined "(?<asciibin>.{8})" max_match=0
    | mvexpand asciibin
    | lookup base64conversion.csv index as asciibin OUTPUT value as outputs
    | table encoded, outputs, time
    | mvcombine outputs
    | eval decoded=mvjoin(outputs,"")
    | table encoded, decoded, time
    ]
| selfjoin time

This is partially adapted from other people's work, so some of it may be familiar from other discussions. My issue is that when this is put into a larger search, it doesn't work for all values, especially the seemingly longer ones. I can't show it in action unfortunately, but if you run it against a number of encoded commands it will only decode one of them. I thought this might be because the selfjoin on time is not entirely unique, but I'm starting to think it's because I'm not using a stats command before the appendpipe to group by encoded - though even when I do that, it still doesn't work. The lookup I'm using is based on the one discussed here: https://community.splunk.com/t5/Splunk-Search/base64-decoding-in-search/m-p/27572

At this point I will likely just install an app if no one can resolve this, but I thought I'd ask for other people's points of view. Any help would be much appreciated.
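One thing worth checking (a hedged guess, not a confirmed diagnosis): mvexpand silently truncates its output when a result set exceeds its memory limit (max_mem_usage_mb in limits.conf), which would hit the longest encoded values first. A quick diagnostic is to compare the character count of each encoded value with the number of rows that survive the expansion:

| eval orig_len=len(encoded)
| eval converts=split(encoded, "")
| mvexpand converts
```one row per character; if mvexpand truncated, the row count falls short of the original length```
| stats count as expanded_len, values(orig_len) as orig_len by encoded
| where expanded_len != orig_len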
I am trying to set up a POC for Splunk indexing. The manager node is up, but it runs over HTTP (there is no certificate yet) instead of HTTPS. While configuring a peer, when I provide the address of the manager node I get an error (screenshot not included). Is there a way to bypass this, or to create a dummy certificate for Splunk?
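Not a confirmed fix, but worth noting: the management port (8089) speaks HTTPS out of the box using Splunk's bundled self-signed certificate, even when Splunk Web runs on plain HTTP. So the peer is normally pointed at an https:// manager URI, as in the sketch below (hostname, secret, and ports are placeholders):

splunk edit cluster-config -mode peer -manager_uri https://<manager-host>:8089 -secret <cluster_secret> -replication_port 9887
splunk restart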
I am a learner who wants to create a Splunk server where I can retrieve information from a business server at the firm where I work. To protect the firm securely, I wonder if I can connect to a server that all of the PCs are connected to and retrieve the information going out and coming in. That way I could supervise the traffic and see if anything is out of the ordinary. If possible - any help is appreciated :).
splunkd is logging the messages listed below; they reappear about three days after splunkd restarts. I've since made some adjustments to indexes.conf and added two attributes:

maxHotBuckets = 5
minHotIdleSecsBeforeForceRoll = auto

Please advise whether these two settings are sufficient to permanently stop these INFO messages.

11-04-2023 15:40:09.545 +0100 INFO HotBucketRoller - finished moving hot to warm bid=asr~308~34353497-7F2F-41CB-B772-DAF7007EA623 idx=abs from=hot_v1_308 to=db_1698249739_1698190953_308 size=786313216 caller=lru maxHotBuckets=3, count=4 hot buckets,evicting_count=1 LRU hots
11-03-2023 22:07:29.511 +0100 INFO HotBucketRoller - finished moving hot to warm bid=_internal~379~34353497-7F2F-41CB-B772-DAF7007EA623 idx=_internal from=hot_v1_379 to=db_1698211695_1698040811_379 size=1048535040 caller=lru maxHotBuckets=3, count=4 hot buckets,evicting_count=1 LRU hots
11-01-2023 07:31:25.596 +0100 INFO HotBucketRoller - finished moving hot to warm bid=_audit~69~34353497-7F2F-41CB-B772-DAF7007EA623 idx=_audit from=hot_v1_69 to=db_1696240764_1695536757_69 size=786419712 caller=lru maxHotBuckets=3, count=4 hot buckets,evicting_count=1 LRU hots
10-31-2023 19:58:48.033 +0100 INFO HotBucketRoller - finished moving hot to warm bid=messagebus~140~34353497-7F2F-41CB-B772-DAF7007EA623 idx=melod from=hot_v1_140 to=db_1696974841_1696841261_140 size=786358272 caller=lru maxHotBuckets=3, count=4 hot buckets,evicting_count=1 LRU hots
10-31-2023 17:23:48.700 +0100 INFO HotBucketRoller - finished moving hot to warm bid=asr~303~34353497-7F2F-41CB-B772-DAF7007EA623 idx=adr from=hot_v1_303 to=db_1697800494_1697727845_303 size=785281024 caller=lru maxHotBuckets=3, count=4 hot buckets,evicting_count=1 LRU hots
10-29-2023 00:03:30.635 +0200 INFO HotBucketRoller - finished moving hot to warm bid=_internal~376~34353497-7F2F-41CB-B772-DAF7007EA623 idx=_internal from=hot_v1_376 to=db_1673823600_1673823600_376 size=40960 caller=lru maxHotBuckets=3, count=4 hot buckets,evicting_count=1 LRU hots
10-27-2023 12:24:16.567 +0200 INFO HotBucketRoller - finished moving hot to warm bid=messagebus~138~34353497-7F2F-41CB-B772-DAF7007EA623 idx=melod from=hot_v1_138 to=db_1696587710_1696461161_138 size=786423808 caller=lru maxHotBuckets=3, count=4 hot buckets,evicting_count=1 LRU hots
10-25-2023 07:28:42.146 +0200 INFO HotBucketRoller - finished moving hot to warm bid=_internal~374~34353497-7F2F-41CB-B772-DAF7007EA623 idx=_internal from=hot_v1_374 to=db_1697476202_1697263512_374 size=1048510464 caller=lru maxHotBuckets=3, count=4 hot buckets,evicting_count=1 LRU hots
10-24-2023 06:36:55.716 +0200 INFO HotBucketRoller - finished moving hot to warm bid=asr~293~34353497-7F2F-41CB-B772-DAF7007EA623 idx=adr from=hot_v1_293 to=db_1697038969_1696983723_293 size=786386944 caller=lru maxHotBuckets=3, count=4 hot buckets,evicting_count=1 LRU hots
10-20-2023 13:15:13.165 +0200 INFO HotBucketRoller - finished moving hot to warm bid=asr~286~34353497-7F2F-41CB-B772-DAF7007EA623 idx=adr from=hot_v1_286 to=db_1696492029_1696421708_286 size=785948672 caller=lru maxHotBuckets=3, count=4 hot buckets,evicting_count=1 LRU hots
10-17-2023 08:50:44.494 +0200 INFO HotBucketRoller - finished moving hot to warm bid=_internal~373~34353497-7F2F-41CB-B772-DAF7007EA623 idx=_internal from=hot_v1_373 to=db_1697263511_1697083171_373 size=1048502272 caller=lru maxHotBuckets=3, count=4 hot buckets,evicting_count=1 LRU hots
10-16-2023 19:10:28.534 +0200 INFO HotBucketRoller - finished moving hot to warm bid=_internal~372~34353497-7F2F-41CB-B772-DAF7007EA623 idx=_internal from=hot_v1_372 to=db_1697083169_1696908238_372 size=1048461312 caller=lru maxHotBuckets=3, count=4 hot buckets,evicting_count=1 LRU hots
10-15-2023 18:10:43.940 +0200 INFO HotBucketRoller - finished moving hot to warm bid=_introspection~230~34353497-7F2F-41CB-B772-DAF7007EA623 idx=_introspection from=hot_v1_230 to=db_1683689783_1619379864_230 size=413696 caller=lru maxHotBuckets=3, count=3 hot buckets + 1 quar bucket,evicting_count=1 LRU hots
10-14-2023 21:26:48.653 +0200 INFO HotBucketRoller - finished moving hot to warm bid=_audit~67~34353497-7F2F-41CB-B772-DAF7007EA623 idx=_audit from=hot_v1_67 to=db_1694945963_1694438187_67 size=786403328 caller=lru maxHotBuckets=3, count=4 hot buckets,evicting_count=1 LRU hots
10-14-2023 08:06:09.886 +0200 INFO HotBucketRoller - finished moving hot to warm bid=_internal~369~34353497-7F2F-41CB-B772-DAF7007EA623 idx=_internal from=hot_v1_369 to=db_1696504588_1696317607_369 size=1047363584 caller=lru maxHotBuckets=3, count=4 hot buckets,evicting_count=1 LRU hots
10-14-2023 05:02:31.677 +0200 INFO HotBucketRoller - finished moving hot to warm bid=wmc~44~34353497-7F2F-41CB-B772-DAF7007EA623 idx=www from=hot_v1_44 to=db_1695949104_1695348831_44 size=786358272 caller=lru maxHotBuckets=3, count=4 hot buckets,evicting_count=1 LRU hots
10-12-2023 05:59:51.941 +0200 INFO HotBucketRoller - finished moving hot to warm bid=_internal~367~34353497-7F2F-41CB-B772-DAF7007EA623 idx=_internal from=hot_v1_367 to=db_1696102911_1695901400_367 size=1048420352 caller=lru maxHotBuckets=3, count=4 hot buckets,evicting_count=1 LRU hots
10-11-2023 17:43:09.179 +0200 INFO HotBucketRoller - finished moving hot to warm bid=asr~284~34353497-7F2F-41CB-B772-DAF7007EA623 idx=adr from=hot_v1_284 to=db_1696364124_1696299722_284 size=786280448 caller=lru maxHotBuckets=3, count=4 hot buckets,evicting_count=1 LRU hots
10-10-2023 23:54:56.050 +0200 INFO HotBucketRoller - finished moving hot to warm bid=messagebus~135~34353497-7F2F-41CB-B772-DAF7007EA623 idx=melod from=hot_v1_135 to=db_1696039435_1695914107_135 size=786350080 caller=lru maxHotBuckets=3, count=4 hot buckets,evicting_count=1 LRU hots
A client is asking about the Splunk Cloud backup and recovery procedure for DR - specifically all the configuration, searches, dashboards, fields, tags, and so on. I cannot find a document outlining Splunk Cloud policies for high availability, backup, and restore. Can anyone point me to this info?

Client ask: "Could you please check and let me know how and where the following items are backed up, and what the process is to recover them for DR purposes?

Audit logs
Use cases
Reports, alerts, lookup tables, KV store, etc.
Config data
Source type config
Parsing
API, TI
Fields config
Data models, macros
Apps and app config
ES config
Threat intel config"
I have some data where I want to write the values of test_n (n in 1, 2, ..., 20) into a multivalue field and keep the numeric order. My attempt is to create the field names in a subsearch and pass them to mvappend(). This does not work:

| makeresults count=20
| streamstats count
| eval test_{count}=count
| stats first(test*) AS test*
| eval x=mvappend([| makeresults count=20
    | streamstats count AS count
    | eval field_names="test".count
    | stats list(field_names) AS field_names
    | nomv field_names
    | eval field_names=replace(field_names," ",", ")
    | return $field_names])

Is there any alternative to spelling out

| eval x=mvappend(test_1, ..., test_20)

by hand?
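A possible alternative to spelling the fields out by hand is foreach (a sketch, untested). One caveat: foreach visits fields in lexicographic order, so test_10 would sort before test_2; zero-padding the names (test_01, test_02, ...) keeps lexicographic and numeric order aligned:

| makeresults count=20
| streamstats count
| eval name=printf("test_%02d", count)
```create test_01 ... test_20 via a dynamic field name, then drop the helper```
| eval {name}=count
| fields - name
| stats first(test_*) AS test_*
| foreach test_* [ eval x=mvappend(x, '<<FIELD>>') ]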
Hi, we have 4 mission-critical MQ servers where the number of queues that need to be monitored by the MQ extension has more than doubled. This means the currently configured metrics limit of 3000 is insufficient. We have added additional resources (CPU and memory) to all 4 servers and want to increase the agent metrics limit to approximately 8-10k.

Q: What increase in agent memory do we need to safely handle this increase with at least 20-30% buffer headroom? Thanks
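For reference, a hedged sketch of where the metric cap is usually raised when the extension runs under the AppDynamics Machine Agent (verify the exact property name against your agent version's documentation; 10000 is just the target from the question, not a sizing recommendation):

# Machine Agent start-up with the per-agent metric limit raised
java -Dappdynamics.agent.maxMetrics=10000 -jar machineagent.jar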
I have a list of regions in one input.dropdown. Based on the region selection, I need to populate the servers in another input.dropdown on the same glass table, using search-based inputs on both dropdowns.
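A rough sketch of the shape this can take in the glass table's source JSON (the data source names, index, and field names are placeholders, and the exact option keys should be checked against your version): the second input's search references the first input's token, so it re-runs whenever the region selection changes.

"inputs": {
    "input_region": {
        "type": "input.dropdown",
        "options": { "token": "region" },
        "dataSources": { "primary": "ds_regions" }
    },
    "input_server": {
        "type": "input.dropdown",
        "options": { "token": "server" },
        "dataSources": { "primary": "ds_servers" }
    }
},
"dataSources": {
    "ds_regions": {
        "type": "ds.search",
        "options": { "query": "index=main | stats count by region" }
    },
    "ds_servers": {
        "type": "ds.search",
        "options": { "query": "index=main region=\"$region$\" | stats count by server" }
    }
}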
Hi, we need to upgrade Splunk Enterprise from version 9.0.0 to 9.0.7 on our deployment server. Can someone please provide the steps required to perform this upgrade? I also need guidance on what needs to be backed up before executing it. Additionally, could you give an estimate of how long the upgrade process takes?
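A minimal sketch of the usual tgz-based, install-over-the-top upgrade on Linux (paths assume a default /opt/splunk install; the package filename is a placeholder, and the 9.0.7 release notes and upgrade documentation should be checked first):

# stop Splunk and back up the configuration
/opt/splunk/bin/splunk stop
tar -czf /backup/splunk-etc-backup.tar.gz /opt/splunk/etc
# unpack the new version over the existing installation
tar -xzf splunk-9.0.7-<build>-Linux-x86_64.tgz -C /opt
# on first start Splunk detects the new version and runs its migration
/opt/splunk/bin/splunk start --accept-license --answer-yes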
Hi Team, I want to create a Splunk dashboard with the average response time taken by all the APIs that follow this pattern.

Example: I have the APIs below.

/api/cvraman/book
/api/apj/book
/api/nehru/book
/api/cvraman/collections
/api/apj/collections
/api/indira/collections
/api/rahul/notes
/api/rajiv/notes
/api/modi/notes

Now I want to check the average per pattern: /api/*/book, /api/*/collections, /api/*/notes. The dashboard should chart only these three response times. I tried the query below, but the dashboard shows one combined average across all three. Can someone please help?

index=your_index (URI="/api/*/book" OR URI="/api/*/collections" OR URI="/api/*/notes")
| stats avg(duration) as avg_time
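A sketch of one way to get per-pattern averages (index=your_index, URI, and duration are taken from the question; the regexes assume exactly one path segment in the wildcard position): derive a group field first, then aggregate by it.

index=your_index (URI="/api/*/book" OR URI="/api/*/collections" OR URI="/api/*/notes")
| eval api_group=case(
    match(URI, "^/api/[^/]+/book$"), "/api/*/book",
    match(URI, "^/api/[^/]+/collections$"), "/api/*/collections",
    match(URI, "^/api/[^/]+/notes$"), "/api/*/notes")
| stats avg(duration) as avg_time by api_group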
Hi Splunkers,

This problem is occurring on the Splunk_TA_paloalto app panels. Does anyone know how to handle it? I understand it has no effect on any search, but it's still annoying.

Thanks in advance.