All Posts


Hi Team, I have an auto-extracted field, auth.policies{}, and another field called user. Whenever auth.policies{} is root, I need that value to become part of the user field. May I know how to do this? Is there a possibility to use case and coalesce together?
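A minimal sketch of combining them, assuming the multivalue field auth.policies{} and an existing user field (the index name is hypothetical): mvfind returns the position of the first matching value or NULL, and coalesce keeps the existing user when it is already populated.

index=my_index
| eval user=coalesce(user, case(isnotnull(mvfind('auth.policies{}', "^root$")), "root"))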
Thank you @phanTom. Sure, I will research it; I am new to Phantom, and your help is much appreciated. Regards, Harisha
We currently have a Splunk Enterprise cluster that uses SmartStore in AWS S3. We're looking to move the cluster to an entirely new AWS account.  However, we are not sure of the best way to move the contents of the SmartStore without corrupting any of the files that have been indexed.  What would be the best way to migrate from one SmartStore backend to another SmartStore backend without losing any data? 
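For what it's worth, a minimal sketch of what the indexes.conf side of the cutover might look like once the objects have been copied to the new account's bucket (bucket name, region, and stanza names are hypothetical; the copy itself would happen outside Splunk, e.g. via S3 replication or a bulk sync, before repointing):

[volume:remote_store]
storageType = remote
path = s3://new-account-smartstore-bucket/indexes
remote.s3.endpoint = https://s3.us-east-1.amazonaws.com

[main]
remotePath = volume:remote_store/$_index_name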
Hi all, we have around 8 dashboards fetching data from the same index. There are around 150 hosts in this index, but we don't want to see data from a particular set of 50 hosts in one dashboard. How can this be done? Any inputs on this, please?
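One common approach is to keep the excluded hosts in a lookup and filter them out with a NOT subsearch; a sketch, assuming a lookup file named excluded_hosts.csv with a single host column (both names are hypothetical):

index=my_index NOT [| inputlookup excluded_hosts.csv | fields host]
| ... rest of the dashboard search ...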
Logs are in JSON format and we want to get the attribute.app.servicecode field values as a dropdown in a classic dashboard. Query: index=application-idx | stats count by attribute.app.servicecode How do we get these field values into a dropdown?
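A sketch of the Simple XML for such a dropdown, using the query above as the populating search (the token name and time range are hypothetical):

<input type="dropdown" token="servicecode_tok" searchWhenChanged="true">
  <label>Service Code</label>
  <search>
    <query>index=application-idx | stats count by attribute.app.servicecode</query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </search>
  <fieldForLabel>attribute.app.servicecode</fieldForLabel>
  <fieldForValue>attribute.app.servicecode</fieldForValue>
</input>

The token (e.g. $servicecode_tok$) can then be referenced in the panel searches.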
Hi, sorry for the long shot. Yes, I did find a way to make it work for a while. Just get the refresh token in and it will work automatically until you have to tune it again.
{"cluster_id":"uscentral","kubernetes":{"host":"worker-uscentral","labels":{"app":"service","version":"v1"},"namespace_name":"om","pod_name":"service-"},"log":{"config":{"headers":{"Accept":"applicat... See more...
{"cluster_id":"uscentral","kubernetes":{"host":"worker-uscentral","labels":{"app":"service","version":"v1"},"namespace_name":"om","pod_name":"service-"},"log":{"config":{"headers":{"Accept":"application/json","Accept-Encoding":"gzip, deflate","Content-Type":"application/json","WM_CONSUMER.ID":"222222-d1bf-4141-a275-2232323232","WM_SVC.ENV":"prod","WM_SVC.NAME":"SERVICEHISTORY","cookie":"a36e-5b4c7c82f05b","wm_qos.correlation_id":"d40031a5271c463d-00","x-o-correlation-id":"00-c0","x-user":"OMN"},"searchParams":{"keyWag":"846","storeNumber":"xx"}},"content":"[{\"status\":\"SERVICE_COMPLETE\",\"OId\":\"eb6fc890-526cb1b\",\"vehicle\":{\"vehicleId\":\"54c4b31\",\"vin\":\"2c3cdxbg8jh255808\",\"year\":\"2018\",\"make\":\"DODGE\",\"model\":\"CHARGER\",\"licensePlate\":\"777\",\"licensePlateState\":\"texas\",\"documentType\":\"vehicle\",\"sourceUpdateRequired\":false,\"isDually\":false},\"customer\":{\"id\":null,\"wmtProfileId\":\"-a0b4-447f-bd99-\",\"communicationConsent\":null,\"firstName\":\"xxx\",\"lastName\":\"xxx\",\"phoneNumber\":\"1111111\",\"countryCode\":1,\"email\":null,\"address\":{\"address1\":\"222 Lane\",\"address2\":null,\"city\":\"long\",\"state\":\"Texas\",\"zip\":\"77535\",\"country\":\"United States\"},\"createdBy\":null,\"dateCreated\":null,\"updatedBy\":null,\"dateUpdated\":null,\"cId\":\"-923a-4c076501f8b0b\",\"documentType\":\"customer\",\"ccpaOptOut\":false,\"legacyIds\":null,\"blockingKey\":null,\"similarRecords\":null,\"createForced\":false,\"linkSource\":null,\"recordSource\":null},\"odometer\":{\"value\":5,\"unit\":\"MILE\"},\"keyTag\":\"777\",\"serviceCart\":{\"serviceItems\":[{\"id\":\"5a92-97304651e9fe\",\"iteId\":\"370122\",\"name\":\"High mileage featured\",\"upc\":\"999\",\"quantity\":0,\"serviceType\":\"OIL_AND_LUBE\",\"components\":[{\"componentType\":\"OIL\",\"product\":{\"itId\":\"99\",\"upc\":\"00071611012225\",\"name\":\"Pennzoil High Mileage 5W20 Motor Oil Ecobox, 1 Quart -\",\"quantity\":5.900,\"retailPrice\":20.72,\"cusredit\":0,\"includedQuantity\":5,\"attributes\":[{\"key\":\"brand\",\"value\":\"Pennzoil\"}]},\"configurations\":[]},{\"componentType\":\"OIL_FILTER\",\"product\":{\"itemId\":\"100992364\",\"upc\":\"00060223\",\"name\":\"FRAM Core 11665 Oil Filter - Offer Valid for In-store Oil Change Only Fits select: 2014-2018 JEEP, 2015-2019 JEEP UNLIMITED\",\"quantity\":1,\"retailPrice\":1.52,\"customerCredit\":0,\"includedQuantity\":1,\"attributes\":[{\"key\":\"brand\",\"value\":\"FRAM\"}]},\"configurations\":[]}],\"retailPrice\":44.88,\"laborCost\":29.76,\"ifNeeded\":false,\"serviceIfNecessary\":false}],\"total\":47.33},\"storeNumber\":\"744\",\"creationDate\":\"2023-05-21T22:53:49.915774Z\",\"documentType\":\"wr\",\"amountPaid\":0,\"markDowns\":[],\"isNewOrder\":true,\"isCarryInOrder\":false,\"isCarryOutOrder\":false,\"isFulfillmentInNextGenBay\":true,\"isVehicleDamageCapturedInSW\":false}]","context":{"duration":"-","level":1,"parentTxId":"00-cb4ec2ec5795a1a7a11-d40031a5271c463d-00","sessionId":"a08f812f-012-18f9d848c23b26","topTxId":"a08f812f-8f9d848c238d4","txId":"a09d848c23775"},"event":"UPSTREAM RESPONSE - GET_ACTIVE_SERVICE_DETAILS","level":30,"msg":"","name":"stdout","oo_env":"prod","reqId":"00-cb4ec2d7e2f031a5271c463d-00","resolverInfo":{},"tags":["info"],"tenant":"US-B2C-undefined","time":1716334661094,"timings":{"dns":2,"download":0,"firstByte":417,"request":0,"tcp":0,"tls":15,"total":434,"wait":0},"type":"INFO","url":"https://localhost/active"},"time":"2024-05-21T23:37:41.094957544Z"}
https://docs.splunk.com/Documentation/SplunkCloud/9.1.2312/Data/Usepersistentqueues

Persistent queuing is available for certain types of inputs, but not all. One major limitation with persistent queues at inputs (enabled on certain UF/HF/IHF/IUF inputs): if the downstream parsingqueue/indexqueue/tcpoutqueue are blocked or saturated and a DS bundle push triggers a Splunk restart, events will be dropped because the UF/HF/IHF/IUF fails to drain its queues. On a Windows DC, persistent queuing is enabled for Windows modular inputs, yet when a DS bundle push triggers a restart, Windows modular input events still sitting in the parsingqueue/tcpoutqueue are dropped. On a Windows DC, some Windows events (those that occurred while the workstation was being shut down) are always lost. When laptops are off the network and restarted or shut down, in-memory queue events are dropped. Even with PQ at inputs, in-memory queued events might still be dropped during a Splunk restart on the forwarding tier.

Typical steps for a laptop where events are always lost:
1. Splunk is installed on a Windows laptop.
2. The laptop is put to sleep.
3. The Splunk service stops.
4. One or two Windows events are generated, such as 4634-Session_Destroyed.
5. Later the laptop wakes up and one or two events are generated, such as 4624-Logon.
6. The Splunk service starts.
7. The events created when sleep started and when sleep ended are not ingested.
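For reference, a minimal sketch of enabling a persistent queue on a forwarder input in inputs.conf (port and sizes are hypothetical; as noted above, the in-memory portion can still be lost during a restart):

[tcp://9997]
# in-memory queue that sits in front of the persistent queue
queueSize = 1MB
# on-disk persistent queue; the default of 0 disables it
persistentQueueSize = 100MB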
Hi @av_ - this is not currently supported but we have heard this feedback before and have it prioritized on our roadmap to address. Stay tuned for updates! 
Hi. We have Splunk case 3421789 opened for this bug. For us it is installing from rpm that fails. It is not an option to install from tar.    
Time only displays well when it is used as the x-axis on charts.
Rather than sharing a picture of the event (which is not a lot of use), please could you share the raw event in a code block </> (anonymised of course)
What if I wanted a different one for each app? So if I put an alert_actions.conf in each app, then each app could have different email parameters, right?
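That matches how app-level configuration layering generally works; a minimal sketch with hypothetical values, placed in each app's local directory so alerts saved in that app pick up their own settings:

# $SPLUNK_HOME/etc/apps/app_a/local/alert_actions.conf
[email]
from = app-a-alerts@example.com
mailserver = smtp-a.example.com:25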
I want to do some analysis on "status" below but I'm having a hard time getting to "status". I start with: | spath path=log.content | table log.content but that only gives me the JSON array from content. I've tried "spath path=log.content{}" and "spath path=log.content{}.status" but it ends up empty. I want to be able to do a ternary operation on "status" like the sample below: | mvexpand log.content{}.status | eval Service=if('log.content{}.status'="CANCELLED", "Cancelled", if('log.content{}.status'="BAY", "Bay", NULL)) | where isnotnull(Service) | stats count by Service
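A sketch of one way to reach status, assuming log.content is a JSON-encoded string as in the raw event above (field names are taken from that event): the first spath pulls the string out, and a second spath with input= parses that string as JSON.

| spath path=log.content output=content
| spath input=content path={}.status output=status
| mvexpand status
| eval Service=case(status="CANCELLED", "Cancelled", status="BAY", "Bay")
| where isnotnull(Service)
| stats count by Service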
Hi @woodcock @sideview, is there a way I can use sendemail with the field values from search results? Details: I have search results with users and email addresses in different columns. The requirement is: when an alert is triggered and the results include the columns user, user_email, subject, recipient, owner, owner_email, then sendemail should use the owner_email values from the results, sending mail to ownerofuser1234@email.com, ownerofuser345@email.com, ownerofuser567@email.com. All fields are dynamic and the list is large.

E.g., an alert is triggered from a search and the following are the results of the alert:

user | user_email | subject | recipient | owner | owner_email
user1234 | user1234@email.com | cross section from alert 1 | recipientname1 | ownerofuser1234 | ownerofuser1234@email.com
user345 | user345@email.com | cross section from alert 2 | recipientname2 | ownerofuser345 | ownerofuser345@email.com
user567 | user567@email.com | cross section from alert 4 | recipientname3 | ownerofuser567 | ownerofuser567@email.com

Thanks in advance!
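One pattern that is sometimes suggested for per-row recipients is map plus sendemail, run once per result row; a rough sketch only (maxsearches and the message text are hypothetical, and the quoting inside map is fiddly):

<base alert search producing user, subject, owner_email, ...>
| map maxsearches=100 search="| makeresults
    | sendemail to=\"$owner_email$\" subject=\"$subject$\" message=\"Please review activity for user $user$\""

Each row's field values are substituted into the $...$ tokens before the inner search runs.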
Having a similar issue -- I'm receiving the same error messages when connecting to my third-party service, but when I test actions in the app view screen, they work. When working from the events/artifacts view I don't get any successful runs.
Hi everyone, I need help on how to integrate SolarWinds with Splunk Cloud or Splunk Enterprise. From what I've seen, the add-on is not supported by Splunk support. Please suggest the best possible ways!
Hi, I want to display time on my dashboard, but all I see is just two fields with data; any help with the search to populate the rest of the fields would be appreciated. I have attached my dashboard. My search looks like this: index=a sourcetype=b earliest=-1d [| inputlookup M003_siem_ass_list where FMA_id=*OS -001* | stats values(ass) as search | eval search=mvjoin(search,", OR ")] | fields ip FMA_id _time d_role | stats latest(_time) as _time values(*) by ip
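If the issue is that _time comes out of stats as raw epoch seconds, a small addition may help for display; a sketch, assuming the rest of the search stays as is (values(*) as * is also an assumption so the value fields keep their original names):

... | stats latest(_time) as _time values(*) as * by ip
| fieldformat _time=strftime(_time, "%Y-%m-%d %H:%M:%S")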
Hi Splunkers, I have a question about the users that run scheduled searches. I know very well that if a user owns a knowledge object, such as a correlation search, and that user is later deleted or disabled, we can run into problems like orphaned objects. So the best practice is to create a service user and assign the knowledge objects to it. Fine. My question is: suppose we have many scheduled correlation searches, for example between 100 and 200. Is assigning all of those searches to a single service user fine, or is it better to create multiple service users to avoid performance issues? I'm asking because of a case some colleagues shared with me once: because there were problems with search lag and skipped searches, in addition to fixing the search schedules, the people involved split ownership across multiple users. Is that useful or not?
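If it helps to see how ownership is currently distributed before deciding, a sketch using the saved-searches REST endpoint (run on the search head; permission to use | rest is assumed):

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search is_scheduled=1 disabled=0
| stats count by eai:acl.owner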
Hi Experts, someone installed the ESCU app directly on the search head cluster members, and now I am upgrading this app to a newer release.

Question: since this app was not installed from the deployer, but I want to upgrade it via the deployer, what is the best practice and method to achieve this? Here is my plan; please correct me if I am thinking about it wrong.

Step 1) Copy the installed app folder from one of the SHC members to the deployer under /etc/apps so that it is installed on the deployer, then manually upgrade it using the deployer GUI.
Step 2) Once upgraded, copy the upgraded app from the /etc/apps folder to the /etc/shcluster/apps folder.
Step 3) Run apply shcluster-bundle on the deployer to push the upgraded app to the SHC members.

Do you think the above is the right approach? If not, what else can I do?
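A sketch of what steps 2 and 3 could look like on the deployer (the app directory name, target URI, and credentials are hypothetical; ESCU typically installs as DA-ESS-ContentUpdate, but check the actual folder name on your members):

# Step 2: stage the upgraded app for the SHC push
cp -R $SPLUNK_HOME/etc/apps/DA-ESS-ContentUpdate $SPLUNK_HOME/etc/shcluster/apps/

# Step 3: push the configuration bundle to the cluster members
$SPLUNK_HOME/bin/splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:changeme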