All Posts


https://docs.splunk.com/Documentation/SplunkCloud/9.1.2312/Data/Usepersistentqueues
Persistent queuing is available for certain types of inputs, but not all. One major limitation of persistent queues at inputs (enabled on certain UF/HF/IHF/IUF inputs): if the downstream parsing/index/tcpout queues are blocked or saturated and a DS bundle push triggers a Splunk restart, events are dropped because the UF/HF/IHF/IUF could not drain its queues. On a Windows DC, persistent queuing is enabled for the Windows modular inputs, yet a DS bundle push that restarts the DC still drops the modular input events sitting in the parsing/tcpout queues, and some Windows events (those that occurred while the workstation was shutting down) are always lost. When laptops are off the network and restarted or shut down, in-memory queued events are dropped. In short, even with PQ at inputs, events still held in Splunk's in-memory queues can be lost during a restart on the forwarding tier.
Typical sequence on a laptop where events are always lost:
1. Splunk is installed on a Windows laptop.
2. The laptop is put to sleep.
3. The Splunk service stops.
4. One or two Windows events are generated, such as 4634 (Session_Destroyed).
5. Later the laptop wakes up and one or two events are generated, such as 4624 (Logon).
6. The Splunk service starts.
7. The events created when sleep started and when sleep ended are never ingested.
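For reference, this is roughly how persistent queuing is switched on for an input that supports it - a minimal sketch only, with a made-up TCP stanza and placeholder sizes (check the docs link above for which input types qualify):

[tcp://10514]
# in-memory queue; anything still here at restart can be lost as described above
queueSize = 1MB
# on-disk queue that absorbs events while the downstream pipeline is blocked
persistentQueueSize = 100MB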
Hi @av_ - this is not currently supported but we have heard this feedback before and have it prioritized on our roadmap to address. Stay tuned for updates! 
Hi. We have Splunk case 3421789 opened for this bug. For us it is installing from rpm that fails. It is not an option to install from tar.    
Time only displays well when it is used as the x-axis on charts.
Rather than sharing a picture of the event (which is not a lot of use), please could you share the raw event in a code block </> (anonymised of course)
What if I wanted a different one for each app? So if I put an alert_actions.conf in each app, then each app could have different email parameters, right?
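Something like this is what I have in mind - a sketch only, with placeholder app names and values, using the standard [email] keys in each app's local alert_actions.conf:

# etc/apps/app_a/local/alert_actions.conf
[email]
from = app_a_alerts@example.com
mailserver = smtp-a.example.com

# etc/apps/app_b/local/alert_actions.conf
[email]
from = app_b_alerts@example.com
mailserver = smtp-b.example.com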
I want to do some analysis on "status" below but am having a hard time getting to "status". I start with: | spath path=log.content | table log.content but that only gives me the json array from content. I've tried "spath path=log.content{}" and "spath path=log.content{}.status" but both end up empty. I want to be able to do a ternary operation on "status" like the sample below:

| mvexpand log.content{}.status
| eval Service=if('log.content{}.status'="CANCELLED", "Cancelled", if('log.content{}.status'="BAY", "Bay", NULL))
| where isnotnull(Service)
| stats count by Service
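In other words, I'm hoping something along these lines is possible - just an illustration of what I'm after, untested, and the output/input field names are placeholders:

| spath path=log.content{} output=content
| mvexpand content
| spath input=content path=status output=status
| eval Service=case(status="CANCELLED", "Cancelled", status="BAY", "Bay")
| where isnotnull(Service)
| stats count by Service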
Hi @woodcock @sideview  Is there a way I can use sendemail with the values of result fields?

Details: I have search results with several columns, including user and email columns. The requirement is: when an alert is triggered and the results include the columns user, user_email, subject, recipient, owner, owner_email, then sendemail should send to each owner_email value from the results, i.e. to ownerofuser1234@email.com, ownerofuser345@email.com, ownerofuser567@email.com. All fields are dynamic and the list is large.

e.g. an alert is triggered from a search and these are the results of the alert:

user      user_email           subject                     recipient       owner            owner_email
user1234  user1234@email.com   cross section from alert 1  recipientname1  ownerofuser1234  ownerofuser1234@email.com
user345   user345@email.com    cross section from alert 2  recipientname2  ownerofuser345   ownerofuser345@email.com
user567   user567@email.com    cross section from alert 4  recipientname3  ownerofuser567   ownerofuser567@email.com

Thanks in advance!
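To make the requirement concrete, this is the kind of thing I'm imagining - an untested sketch; the field names match the table above, but the map quoting and sendemail options would need checking:

<base alert search>
| map maxsearches=100 search="| makeresults
    | eval user=\"$user$\"
    | sendemail to=\"$owner_email$\" subject=\"$subject$\" sendresults=true inline=true"

i.e. one sendemail per result row, with map substituting each row's owner_email and subject.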
Having a similar issue -- receiving the same error messages when connecting to my third-party service, but when I test actions from the app view screen, they work. When working from the events/artifacts view I don't get any successful runs.
Hi everyone, I need help with how to integrate SolarWinds with Splunk Cloud or Splunk Enterprise. As far as I have seen, the add-on is not supported by Splunk Support. Please suggest the best possible approaches!
Hi, I want to display time on my dashboard but all I see is just two fields with data. Any help with the search to populate the rest of the fields would be appreciated. I have attached my dashboard. My search looks like this:

index=a sourcetype=b earliest=-1d
    [| inputlookup M003_siem_ass_list where FMA_id=*OS -001*
    | stats values(ass) as search
    | eval search=mvjoin(search, ", OR ")]
| fields ip FMA_id _time d_role
| stats latest(_time) as _time values(*) by ip
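For comparison, the variant I have been trying next is below - the only change from the search above is values(*) as * in the final stats, which is my guess at why most columns come back empty; otherwise untested:

index=a sourcetype=b earliest=-1d
    [| inputlookup M003_siem_ass_list where FMA_id=*OS -001*
    | stats values(ass) as search
    | eval search=mvjoin(search, ", OR ")]
| fields ip FMA_id _time d_role
| stats latest(_time) as _time values(*) as * by ip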
Hi Splunkers, I have a question about the users that run scheduled searches. I know very well that if a user owns a knowledge object, such as a correlation search, and that user is deleted/disabled, we can run into problems like orphaned objects. So the best practice is to create a service user and assign it as owner of the KOs. Fine. My question is: suppose we have many scheduled correlation searches, for example between 100 and 200. Is assigning all of those searches to a single service user fine, or is it better to create multiple service users to avoid performance problems? I'm asking because of a case some colleagues shared with me: due to problems with search lag/skipped searches, in addition to fixing the search scheduling, the people involved split ownership across multiple users. Is that useful or not?
Hi Experts, someone installed the ESCU app directly on the search head cluster members. Now I am upgrading this app to a newer release.

Question: since this app was not installed from the deployer, but I want to upgrade it via the deployer, what is the best practice and method to achieve this?

Here is my plan, please correct me if I am thinking wrong:
Step 1) Copy the installed app folder from one of the SHC members to the deployer under etc/apps so it is installed on the deployer, then manually upgrade it there using the deployer GUI.
Step 2) Once upgraded, copy the upgraded app from the etc/apps folder to the etc/shcluster/apps folder.
Step 3) Run apply shcluster-bundle on the deployer to push the upgraded app to the SHC members.

Do you think the above is the right approach? If not, what else can I do?
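In shell terms, steps 2 and 3 would look roughly like this - a sketch only; it assumes the ESCU app folder is named DA-ESS-ContentUpdate, and the host and credentials are placeholders:

# Step 2: stage the upgraded app for the SHC push (run on the deployer)
cp -R $SPLUNK_HOME/etc/apps/DA-ESS-ContentUpdate $SPLUNK_HOME/etc/shcluster/apps/

# Step 3: push the bundle to the SHC members
$SPLUNK_HOME/bin/splunk apply shcluster-bundle -target https://<any_shc_member>:8089 -auth admin:<password>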
Ugh....sorry.  I modified data in the examples as I was typing my last response, and didn't update each "table" as needed.  Here are correct values.  Sorry for the confusion!  I didn't see an option to edit or delete my last response.

Sourcetype=autos
VIN       MAKE       MODEL
1234ABCD  FORD       GT
ABCD1234  DODGE      VIPER
1A2B3C4D  CHEVROLET  CORVETTE
A1B2C3D4  AUDI

Sourcetype=cars
SN        MANUFACTURER  PRODUCT
1234ABCD  FORD          GT
ABCD1234  DODGE         CARAVAN
1A2B3C4D  CHEVY         CORVETTE
A1B2C3D4                A8

I'd like to compare the two sourcetypes and see the results where VIN=SN, but MAKE!=MANUFACTURER OR MODEL!=PRODUCT. (Caveat - if any events in either sourcetype contain a null value, they can be ignored/excluded by the search.)

From the example data above, ideally the search would display the following fields, and results would contain these two events (because VIN and SN match, but "VIPER" does not equal "CARAVAN", and "CHEVROLET" does not equal "CHEVY").

VIN       MAKE       MODEL     SN        MANUFACTURER  PRODUCT
ABCD1234  DODGE      VIPER     ABCD1234  DODGE         CARAVAN
1A2B3C4D  CHEVROLET  CORVETTE  1A2B3C4D  CHEVY         CORVETTE

Sorry again for the confusion.
Thanks for the response, Bowesmana.  Understood. Here are sourcetypes and field data examples.

Sourcetype=autos
VIN       MAKE       MODEL
1234ABCD  FORD       GT
1A2B3C4D  CHEVROLET  CORVETTE
ABCD1234  DODGE      VIPER
A12B3C4D  AUDI

Sourcetype=cars
SN        MANUFACTURER  PRODUCT
1234ABCD  FORD          GT
ABCD1234  CHEVY         CORVETTE
1A2B3C4D  DODGE         CARAVAN
A1B2C3D4                A8

I'd like to compare the two sourcetypes and see the results where VIN=SN, but MAKE!=MANUFACTURER OR MODEL!=PRODUCT. (Caveat - if any events in either sourcetype contain a null value, they can be ignored/excluded by the search.)

From the example data above, ideally the search would display the following fields, and results would contain these two events (because VIN and SN match, but "CHEVROLET" does not equal "CHEVY", and "VIPER" does not equal "CARAVAN").

VIN       MAKE       MODEL     SN        MANUFACTURER  PRODUCT
1A2B3C4D  CHEVROLET  CORVETTE  1A2B3C4D  CHEVY         CORVETTE
ABCD1234  DODGE      VIPER     ABCD1234  DODGE         CARAVAN

Hope this helps to clarify.  Please let me know if you have any questions or suggestions.  I appreciate your help!
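For reference, the sort of search I've been attempting is below - a rough, untested sketch; it assumes the VIN/SN values are unique within each sourcetype:

(sourcetype=autos) OR (sourcetype=cars)
| eval key=coalesce(VIN, SN)
| stats values(VIN) as VIN values(MAKE) as MAKE values(MODEL) as MODEL values(SN) as SN values(MANUFACTURER) as MANUFACTURER values(PRODUCT) as PRODUCT by key
| where isnotnull(MAKE) AND isnotnull(MODEL) AND isnotnull(MANUFACTURER) AND isnotnull(PRODUCT)
| where MAKE!=MANUFACTURER OR MODEL!=PRODUCT
| table VIN MAKE MODEL SN MANUFACTURER PRODUCT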
Thank you, good catch. Not sure how I messed that up. I have corrected that error now, but I still do not get multiple fields extracted from this single extraction.
hey @gcusello , JSON extractions will not work for this. The full event is not JSON; only the Data portion is. Example event (without all of the claims):

TID: [-1234]  [2024-05-21 17:40:35,777] [asdf-asdf-asdf-asdf-asdf ] INFO {AUDIT_LOG} - Initiator=initiator Action=action Target=target Data= {"Claims":{ "http://wso2.org/claims/username":"user", "http://wso2.org/claims/role":"role"}}
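What I'm picturing is carving the JSON out with rex and then running spath on just that piece - an untested sketch; the json_data and claim_* names are placeholders:

| rex field=_raw "Data=\s*(?<json_data>\{.*\})"
| spath input=json_data
| rename "Claims.http://wso2.org/claims/username" as claim_username, "Claims.http://wso2.org/claims/role" as claim_role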
That's strange at every midnight - sounds like gremlins are out to play! A few things you can check that may give you some clues (and, as you have already stated, it's always better to use UFs/SC4S rather than sending direct to Splunk - that is really only for small environments/POCs etc.).

As the HF is a full instance and will parse data, forward, etc., it might be worth having a look at TcpOutputProc in splunkd.log - or, via the Splunk search bar:

index=_internal sourcetype=splunkd host=<YOUR HOST> (log_level=WARN OR log_level=ERROR) TcpOutputProc

Also check for any ERRORs from the HF. You might find some clues around timeouts, queues being full, or some invalid configuration.

Perhaps increase the log level on the HF (can also be done via the GUI):

/opt/splunk/bin/splunk set log-level TcpOutputProc -level DEBUG

Remember to turn it off afterwards (can also be done via the GUI):

/opt/splunk/bin/splunk set log-level TcpOutputProc -level INFO

You could also do some checks on performance, memory, CPU and disk, and collect some of those stats. I have seen HFs used as syslog receivers for large volumes of streaming data stop functioning, but that was at different times.

Optionally, if you have enough memory and this is not the issue, you could try increasing the in-memory queue size in server.conf on the HF and see if that helps. Example:

[queue]
maxSize = 5000MB

I have also seen a vulnerability scanner cause Splunk to stop responding at regular intervals.
Hello Splunk Community, I'm encountering an issue with configuration replication in Splunk Cloud Victoria Experience when using search head clusters behind a load balancer. Here's the scenario: I have developed a private custom search command app that requires some user configuration. For this purpose, I've added a custom config file in the /etc/apps/<appname>/default directory. Additionally, I've configured the app.conf as follows:

[triggers]
reload.<custom_conf> = simple

[shclustering]
deployer_push_mode = full

I've also included a server.conf inside etc/apps/<appname>/default with the following configuration:

[shclustering]
conf_replication_include.<custom_conf_name> = true

When attempting to install this private app using the install_app_from_file option in a Splunk Cloud Victoria Experience with search head clusters behind a load balancer, it appears that the app configuration is not being replicated across search heads. Could someone please assist me in identifying if there's anything I'm missing or doing incorrectly? Thank you. Avnish
Hi @gcusello
This is the query I am using to map InterfaceName to Link, so I appended the inputlookup to the base query. The base query also has the interface name, so I am trying to map the values, but the Link is not populating in the table.

index="mulesoft" environment=PRD
| rename content.payload.Status as Status
| append
    [ inputlookup link.csv
    | table Link InterfaceName]
| stats values(content.payload.InterfaceName) as payLoadInterface values(content.payload.ErrorMessage) as ErrorMsg earliest(timestamp) as Timestamp values(priority) as Priority values(tracePoint) as Tracepoint values(Link) as Link values(InterfaceName) as Interface by correlationId
| eval names = if(isnull(mvfind(message, "DISABLED")), null(), message)
| eval Response = coalesce(SuccessResponse, Successresponse, msg, names, ErrorMsg)
| eval InterfaceName = coalesce(Interface, payLoadInterface)
| table Status Timestamp InterfaceName Link Response correlationId message Priority Tracepoint
| fields - message Tracepoint Priority
| search InterfaceName="*"
| where Status LIKE ("%")
| sort -Timestamp
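I'm wondering whether something like the following would do the mapping per event instead of via append - just an untested sketch, assuming the lookup command can read link.csv directly and that it has the columns InterfaceName and Link:

index="mulesoft" environment=PRD
| rename content.payload.Status as Status, content.payload.InterfaceName as InterfaceName
| lookup link.csv InterfaceName OUTPUT Link
| stats values(InterfaceName) as InterfaceName values(content.payload.ErrorMessage) as ErrorMsg earliest(timestamp) as Timestamp values(Link) as Link by correlationId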