All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

I have the query below for an alert, but the result does not include host or description. How can I achieve this?
Hi All, I have logs like below and want to create a table out of them.

log1:
GROUP TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
connect-ABC ABC.sinkevents 0 15087148 15087148 0 connector-consumer-ABC /10.231.95.96 connector-consumer-ABC.sinkevents-0

log2:
GROUP TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
connect-XYZ XYZ.cardtransactionauthorizationalertsent 0 27775 27780 5 connector-consumer-XYZ /10.231.95.97 connector-consumer-XYZ.Cardtransactionauthorizationalertsent-0
connect-XYZ XYZ.cardtransactionauthorizationalertsent 1 27740 27747 7 connector-consumer-XYZ /10.231.95.97 connector-consumer-XYZ.Cardtransactionauthorizationalertsent-0
connect-XYZ XYZ.cardtransactionauthorizationalertsent 2 27836 27836 0 connector-consumer-XYZ /10.231.95.97 connector-consumer-XYZ.Cardtransactionauthorizationalertsent-0

I created this query:

.... | rex field=_raw "CLIENT\-ID\s+(?P<Group>[^\s]+)\s(?P<Topic>[^\s]+)\s(?P<Partition>[^\s]+)\s+(?P<Current_Offset>[^\s]+)\s+(?P<Log_End_Offset>[^\s]+)\s+(?P<Lag>[^\s]+)\s+(?P<Consumer_ID>[^\s]+)\s{0,20}(?P<Host>[^\s]+)\s+(?P<Client_ID>[^\s]+)" | table Group,Topic,Partition,Lag,Consumer_ID

which gives this table:

Group Topic Partition Lag Consumer_ID
connect-ABC ABC.sinkevents 0 0 connector-consumer-ABC
connect-XYZ XYZ.cardtransactionauthorizationalertsent 0 5 connector-consumer-XYZ

Here I am missing the last two rows of log2. I want to modify the query so that it produces the table like this:

Group Topic Partition Lag Consumer_ID
connect-ABC ABC.sinkevents 0 0 connector-consumer-ABC
connect-XYZ XYZ.cardtransactionauthorizationalertsent 0 5 connector-consumer-XYZ
connect-XYZ XYZ.cardtransactionauthorizationalertsent 1 7 connector-consumer-XYZ
connect-XYZ XYZ.cardtransactionauthorizationalertsent 2 0 connector-consumer-XYZ

Please help me modify the query to get the desired output. Thank you!
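One common approach for this shape of log: extract every data row with max_match=0, so rex returns multivalue fields, then mvexpand into one result per row. A sketch (untested; the row regex assumes group names start with connect-, as in the samples, so adjust it to your data):

```
... base search ...
| rex field=_raw max_match=0 "(?P<row>connect-\S+\s+\S+\s+\d+\s+\d+\s+\d+\s+\d+\s+\S+\s+\S+\s+\S+)"
| mvexpand row
| rex field=row "^(?P<Group>\S+)\s+(?P<Topic>\S+)\s+(?P<Partition>\d+)\s+(?P<Current_Offset>\d+)\s+(?P<Log_End_Offset>\d+)\s+(?P<Lag>\d+)\s+(?P<Consumer_ID>\S+)\s+(?P<Host>\S+)\s+(?P<Client_ID>\S+)"
| table Group, Topic, Partition, Lag, Consumer_ID
```

Without max_match, rex captures only the first match per event, which is why the second and third partitions of log2 were dropped.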
Hi, I have installed a Splunk instance that serves as a search head, and now I need to install another instance to serve as a heavy forwarder. However, when I download the Splunk package and extract it into a different directory than my first instance, it tells me that port 8000 is in use and then asks me for different ports, since other daemon ports are in use as well. Is this normal? Is this the standard procedure? I just need both instances to be running on port 8000 on the same VM. Also, I need to SSH into my search head instance, but when I run ssh [hostname]@[private-ip:8000] I get an error saying "could not resolve hostname". I would really appreciate some guidance. Thanks.
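For what it's worth, two Splunk instances on one host cannot share ports: each needs its own web port (default 8000) and management port (default 8089), which is why the installer prompts for new ones. A sketch for the second instance (the port numbers here are arbitrary examples):

```
# run from the second instance's installation directory
./bin/splunk set web-port 8001
./bin/splunk set splunkd-port 8090
./bin/splunk restart
```

Note also that ssh talks to port 22, not the Splunk web port: ssh user@private-ip reaches the VM, while http://private-ip:8000 reaches Splunk Web. A heavy forwarder typically doesn't need Splunk Web at all, so it can simply be disabled on that instance.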
I believe the 8.6 version is missing a few default lookups. I receive an error about being unable to find the "nix_fs_notification_change_type" lookup whenever we search. If you compare the docs to the \Splunk_TA_nix\lookups dir, there are at least 5 lookups missing. In 8.5 all 10 lookups are present: https://docs.splunk.com/Documentation/AddOns/released/UnixLinux/Lookups. I suggest copying in the missing lookups or just staying on 8.5.
Hi, is it possible to make a table like the example below that refreshes every 10 minutes, updates the status column to either Arrived or Delayed, and changes the color of that row to green or red?
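One way to sketch this (the field names here are made up for illustration): compute the status with eval, then handle refresh and color in the dashboard itself.

```
... base search ...
| eval status = if(actual_time <= scheduled_time, "Arrived", "Delayed")
| table item, scheduled_time, actual_time, status
```

In Simple XML, a <refresh>10m</refresh> element inside the panel's <search> handles the periodic update, and the table's Format menu (or a <format type="color" field="status"> element with a color palette mapping Arrived to green and Delayed to red) handles the coloring. Out of the box this colors the status cell rather than the whole row; full-row coloring usually needs CSS/JS customization.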
Hi team, we are logging file copy logs and application logs into Splunk and using Splunk alerting for file-not-copied scenarios and connectivity issues. Along with this alerting, we want to take actions based on those connectivity-issue / file-not-copied scenarios. Can anyone please share such scenarios, along with examples? In brief: does Splunk have a feature to take action based on logs? Also, please let me know which Splunk features beyond alerting can add business value. Thanks in advance.
Hello everyone. With some embarrassment I confess that I do not know how to use the lookup command, and although I have read the documentation I have not been able to work it out. I have an index called antivirus, and one of the fields is "Aplicacion", from which I have obtained a list of all the programs installed on the users' computers. Now my client has returned the list of programs that are authorized, and I must add an exception for them.

This is the SPL I currently use:

index=antivirus event=KLNAG_EV_INV_APP_INSTALLED
| search Aplicacion!="*teams*" Aplicacion!="*Adobe*" Aplicacion!="*java*" Aplicacion!="*skype*" Aplicacion!="*365*" Aplicacion!="*kaspersky*" Aplicacion!="*chrome*" Aplicacion!="*SAP*" Aplicacion!="*SQL*" Aplicacion!="*visual studio*" Aplicacion!="*office*" Aplicacion!="*Microsoft OneDrive*" Aplicacion!="Microsoft Edge" Aplicacion!="WebView2 Runtime de Microsoft Edge" Aplicacion!="zoom" Aplicacion!="Hyland Unity Client [Unity_Prod]" Aplicacion!="Microsoft Windows QFE" Aplicacion!="Offimizer" (here I need to use the lookup command, and there are about 100 more entries)
| stats count by Aplicacion IP message
| sort - count

Now, I know that I have the Lookup Editor plugin, and I suppose that from there I can upload the file. My question is whether it can be .xlsx or whether it has to be .csv.
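For the mechanics: lookup files uploaded this way need to be .csv (or KV store collections), not .xlsx; the Lookup Editor works on CSVs. Assuming you upload a file named aplicaciones_autorizadas.csv (a hypothetical name) with one column, Aplicacion, listing the authorized programs, the exclusion could look like:

```
index=antivirus event=KLNAG_EV_INV_APP_INSTALLED
| lookup aplicaciones_autorizadas.csv Aplicacion OUTPUT Aplicacion AS autorizada
| where isnull(autorizada)
| stats count by Aplicacion IP message
| sort - count
```

If the authorized list needs wildcard matching (e.g. *Adobe*), define the lookup in transforms.conf with match_type = WILDCARD(Aplicacion) rather than relying on exact matches.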
Hoping someone can point me in the right direction. Our Splunk monitoring keeps reporting 90-100% CPU utilization; however, when checking the servers, one core will be close to maxed out during a few functions for up to 20 minutes, but the rest of the cores are quite low, with no performance issues on the server. So I'm looking for a better way to report: is there core-level monitoring, or a field I can add to the CPU monitoring, to address this? Thank you in advance.
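If the Splunk Add-on for Unix and Linux is collecting this data, its cpu sourcetype reports per-core rows (the CPU field is the core number, or "all" for the host aggregate), which gives exactly this core-level view. A sketch (field names follow the add-on; verify against your data):

```
index=os sourcetype=cpu CPU!="all"
| eval pctUsed = 100 - pctIdle
| timechart span=5m max(pctUsed) BY CPU
```

Charting max rather than avg makes a single saturated core stand out even when the host-wide average looks healthy.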
Hello, I have onboarded data into Splunk that has multiple timestamps per event, in different formats. I believe my props settings are correct; however, splunkd.log shows an error. Please advise.

Error details:
DateParserVerbose [99999 merging_0] - Failed to parse timestamp in first MAX_TIMESTAMP_LOOKAHEAD (16) characters of event. Defaulting to timestamp of previous event

Event details:
Jul 10 14:19:08 abcdefgh81 dnsmask Jul 10 14:19:08 dnsmask[1520]: cached abcdefg43.wellness.com is 10.220.200.72
Jul 10 14:19:08 abcdefgh81 dnsmask -- [10/July/2022:18:10:10 -9900] dnsmask[1520]: cached abcdefg43.wellness.com is 10.220.200.72

Here are my props settings:
TIME_PREFIX = ^
TIME_FORMAT = %b %d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 16
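For reference, a sketch of props that should match the sample events above ("Jul 10 14:19:08" is 15 characters, so a lookahead of 15-16 is sufficient). When this error appears despite settings like these, the usual causes are events that don't actually begin with the expected prefix, or the props stanza not being deployed to the instance that parses the data (indexer or heavy forwarder):

```
[your_sourcetype]
# hypothetical stanza name; use your real sourcetype
TIME_PREFIX = ^
TIME_FORMAT = %b %d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 15
# %b %d %H:%M:%S carries no year or timezone; consider setting TZ
# explicitly so the inferred values are predictable
```

Once the prefix anchors at the start of the line, the later "[10/July/2022:18:10:10 -9900]" date inside the second event is ignored rather than attempted.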
Getting 404 errors when trying to access the MC Summary and Health Check pages after an upgrade from 8.1.5 to 8.2.7. The first error is: monitoringconsole_landing.js:283 Uncaught (in promise) TypeError: _swcMc.ThemeUtils.getReactUITheme is not a function, etc. Has anyone seen something similar? Other pages in the MC work fine.
My question is about this solution: https://community.splunk.com/t5/Alerting/How-can-I-query-to-get-all-alerts-which-are-configured/m-p/288846#M9051. I do not have admin rights. When I run this query I get the following warning: "Restricting results of the "rest" operator to the local instance because you do not have the "dispatch_rest_to_indexers" capability", and the result is only a partial listing. Is there anything I can do besides asking the admins to run the query for me? We use Splunk Enterprise 8.2.1.
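One thing worth trying: saved searches and alert configurations live on the search head, so explicitly scoping the rest call to the local instance usually still returns the complete alert list (and suppresses the warning). A sketch:

```
| rest /servicesNS/-/-/saved/searches splunk_server=local
| where disabled = 0 AND is_scheduled = 1
| table title, eai:acl.app, eai:acl.owner, cron_schedule
```

If some alerts are defined in apps your role cannot read, the listing will still be partial; that part genuinely requires broader permissions.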
We are, unfortunately, having to change index names to match a naming convention. I have a list of indexes that need to be "renamed". For routing to the new indexes, rather than track down all the configs (which are managed with DS, Chef, and other tools), I would like to use props/transforms on the HWFs and IDXs to look at every event and, if it was destined for index foo, send it to index bar.

Here is what I have on a single Splunk instance, in system/local (for simplicity while testing the config):

props.conf:
[default]
TRANSFORMS-foo_idx_rename = foo_idx_rename

transforms.conf:
[foo_idx_rename]
SOURCE_KEY = MetaData:Index
REGEX = (foo)
DEST_KEY = _MetaData:Index
FORMAT = bar

I've also tried:
REGEX = foo
REGEX = "foo"
REGEX = "*foo*"
REGEX = index::foo

Nothing I've tried seems to work. My questions: What is the actual value the regex is evaluated against for the index metadata field (i.e., index::foo or just foo)? Are double quotes required in the regex? Must there be parentheses? I've seen a couple of examples that say they work, but when I copy them verbatim, they do not.
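For comparison, the commonly cited working form of this config uses the underscore key on both sides, with the regex matched against the bare index name (so no index:: prefix, no quotes, and parentheses are not required). A sketch worth testing:

```
# props.conf
[default]
TRANSFORMS-foo_idx_rename = foo_idx_rename

# transforms.conf
[foo_idx_rename]
SOURCE_KEY = _MetaData:Index
REGEX = ^foo$
DEST_KEY = _MetaData:Index
FORMAT = bar
```

One caveat: index-time transforms only run where raw data is first parsed, so events already cooked by an upstream heavy forwarder will not be re-routed again on the indexers.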
Hi, I have the following questions: 1) Does Splunk provide detailed documentation/write-ups for architecting observability of Apigee (Private Cloud), covering both Apigee platform components and Apigee API proxies? 2) Is there any detailed documentation on architecting and configuring observability with respect to compliance/alerts when handling Apigee Private Cloud?
Is there any API we could use to query Splunk performance/monitoring metrics? We want to leverage the data for our internal analysis. We see the data in the Monitoring Console, but we want to query it programmatically.
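Everything the Monitoring Console shows is itself produced by searches, mostly over the _introspection and _internal indexes, and any search can be run programmatically through the REST API (the /services/search/jobs endpoints on the management port, default 8089). A sketch of a search over host-level resource data (field names follow the resource_usage introspection logs; verify them in your environment):

```
index=_introspection sourcetype=splunk_resource_usage component=Hostwide
| timechart span=5m avg(data.cpu_system_pct) AS cpu_system, avg(data.mem_used) AS mem_used
```

The same query can be submitted to /services/search/jobs/export to stream results into an external analysis pipeline.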
First, let me explain my intention: I am attempting to create a query that would notify our team of a "stuck order". An order is "stuck" when one team has produced an event and another team has not responded to it. In this specific case, one team produces an AuthorizationSucceeded event and another team is expected to produce a FraudDeclined/Approved event. I have tried using map, but I need to find the orderIds that do not exist in the second search, so I have moved on to subsearches using NOT. Ideally I want a list of orderIds that exist in:

index=app_pci source=http:nepp host=nepp-service-v3-prod message.message="Attempt to produce Kafka event finished: AuthorizationSucceeded*"

but not in:

index=k8s_main container_name=fraud-single-proxy-listener message="Successfully handled AuthorizationSucceeded event*"

Here is my query thus far, but it is not producing the results I want:

index=k8s_main container_name=fraud-single-proxy-listener message="Successfully handled AuthorizationSucceeded event*" NOT [search index=app_pci source=http:nepp host=nepp-service-v3-prod message.message="Attempt to produce Kafka event finished: AuthorizationSucceeded*" | rename properties.orderId as contextMap.orderId | table contextMap.orderId]

Any help would be amazing.
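An alternative to the NOT-subsearch pattern (which silently truncates once the subsearch exceeds its result limit) is to search both sources at once and compare per-order counts. A sketch, assuming properties.orderId and contextMap.orderId identify the same order on the two sides:

```
(index=app_pci source=http:nepp host=nepp-service-v3-prod message.message="Attempt to produce Kafka event finished: AuthorizationSucceeded*")
OR (index=k8s_main container_name=fraud-single-proxy-listener message="Successfully handled AuthorizationSucceeded event*")
| eval orderId = coalesce('properties.orderId', 'contextMap.orderId')
| stats count(eval(index="app_pci")) AS produced, count(eval(index="k8s_main")) AS handled BY orderId
| where produced > 0 AND handled = 0
```

The final where clause keeps exactly the orders that were produced but never handled, which matches the "stuck order" definition above.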
We would like to track our Splunk Enterprise cluster's performance to keep an eye on whether we have sufficient resources allocated. As part of this, we would like to track average search queue volume and wait time, but I have had a hard time finding any way to generate this data. Is this data exposed anywhere for searching in Splunk? We are already using the MC saturated event queue for the indexers, as well as CPU/memory usage for both indexers and search heads.
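Some of this is searchable in _internal. For scheduled searches, the scheduler log records when searches are deferred (i.e., queued) because concurrency limits were hit; a sketch (status values and fields may vary slightly by version, so verify against your data):

```
index=_internal sourcetype=scheduler status=*
| timechart span=15m count BY status
```

A rising count of status=deferred (or skipped) is a reasonable proxy for search queue pressure. Ad-hoc search concurrency is also visible in the Monitoring Console's search activity views, which are backed by _introspection searches you can open and reuse.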
Need help building a REST API integration in Splunk ES for Oracle IDCS.
Hi, we are trying to process and ingest AWS S3 events into Splunk, but noticed a few events are getting split. After checking the configuration, we suspect this is caused by Splunk's internal parsing. Please let us know if there are any issues in our configuration, or whether it could be something related to the Splunk parser. Below are the entries in props.conf and transforms.conf:

props.conf:
[proxy]
REPORT-proxylogs-fields = proxylogs_fields,extract_url_domain
LINE_BREAKER = ([\r\n]+)
# EVENT_BREAKER = ([\r\n]+)
# EVENT_BREAKER_ENABLE = true
SHOULD_LINEMERGE = false
CHARSET = AUTO
disabled = false
TRUNCATE = 1000000
MAX_EVENTS = 1000000
EVAL-product = "Umbrella"
EVAL-vendor = "xyz"
EVAL-vendor_product = "abc"
MAX_TIMESTAMP_LOOKAHEAD = 22
NO_BINARY_CHECK = true
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
TZ = UTC

transforms.conf:
[proxylogs_fields]
DELIMS = ","
FIELDS = Timestamp,policy_identities,src,src_translated_ip,dest,content_type,action,url,http_referrer,http_user_agent,status,requestSize,responseSize,responseBodySize,sha256,category,av_detection,pua,amp_disposition,amp_malwarename,amp_score,policy_identity_type,blocked_category,identities,identity_type,request_method,dlp_status,certificate_errors,filename,rulesetID,ruleID,destinationListID,s3_filename

Example event:
"2022-06-27 08:57:14","wer.com","1.1.1.1","1.1.1.1","10.10.10.10","image/gif","ALLOWED","https://www.moug.net/img/btn_learning.gif","https://www.mikhgg.net/tech/woopr/0025.html","Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.5005.124 Safari/537.36 Edg/102.0.1245.44","200","","3571","3328","1a146b09676811234dddccd6dc0ee3cf11aa1803e774df17aa9a49a7370a40ec","Allow List,Fashion","","","","","","AD Users","","wer.com","AD Users,Network Tunnels","GET","ALLOWED","","btn_learning.gif","13347559","346105","15065619",2022-06-27-09-50-ade8.csv.gz

Events as seen in Splunk: [screenshot]
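A guess at the cause, based on the sample event: TIME_PREFIX = ^ asks Splunk to parse %Y-%m-%d %H:%M:%S starting at a double quote, so timestamp recognition can fail, and the generic LINE_BREAKER splits on any embedded newline inside a record. A sketch that anchors both the line breaker and the timestamp on the leading quoted timestamp (worth testing against your data):

```
[proxy]
SHOULD_LINEMERGE = false
# break only where a newline is followed by a quoted timestamp
LINE_BREAKER = ([\r\n]+)(?="\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}")
TIME_PREFIX = ^"
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19
TZ = UTC
```

If a universal forwarder reads these files, the commented-out EVENT_BREAKER / EVENT_BREAKER_ENABLE settings (with the same regex) would also need to be enabled on the forwarder, since UFs chunk data before the indexer's LINE_BREAKER runs.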
Does anyone know how I can integrate SentinelOne with Splunk? Is there any documentation I can follow? Thank you.