Hi everyone, Do you know a way to change the value of a piece of metadata for a universal forwarder? I added my own metadata with _meta = id::Mik. After adding it in fields.conf, I can see it in my events. Now I'd like to change the value Mik to another value. Can I use the Python API? The Splunk CLI? The REST API? There seem to be a lot of possible solutions, but so far I can't find one. Thanks
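A minimal sketch of what that change might look like, assuming the _meta key is set on a monitor stanza in the forwarder's inputs.conf (the path below is illustrative):

[monitor:///var/log/myapp]
# hypothetical input stanza; replace the old value after id:: and restart the forwarder
_meta = id::NewValue

The forwarder needs a restart ($SPLUNK_HOME/bin/splunk restart) to pick up the edited value, and fields.conf must still declare id as an indexed field for it to stay searchable.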
Hello to everyone! I have a strange issue with some events that come from our virtual environment. As you can see in the screenshot, some events become multi-lined, but I don't want to observe such behavior. I suspected the SHOULD_LINEMERGE setting, which can merge events with similar timestamps, but I have now turned it off. Below the Splunk search window I attached a result from regex101.com that shows how my LINE_BREAKER option works. I also tried to use the default ([\r\n]+) regex, but it did not work either. What did I miss in my approach? My props.conf for the desired sourcetype looks like:

[vsi_esxi_syslog]
LINE_BREAKER = ([\r\n]+)|(\n+)
SHOULD_LINEMERGE = false
TIME_PREFIX = ^<\d+>
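One pattern that sometimes helps here, sketched under the assumption that every real event starts with a syslog priority header like <123>: break only where a newline is followed by that header, so newlines embedded inside an event no longer split it.

[vsi_esxi_syslog]
# break before a new syslog priority header only; the lookahead is not consumed
LINE_BREAKER = ([\r\n]+)(?=<\d+>)
SHOULD_LINEMERGE = false
TIME_PREFIX = ^<\d+>

Also note that line-breaking settings must live on the first full (parsing) Splunk instance the data passes through; if they sit only on a universal forwarder or only on the search head, they have no effect.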
Looking to create an alert if a host on a lookup stops sending data to Splunk index=abc. I have created a lookup called hosts.csv with all the hosts expected to be logging for a data source. Now I need to create a search/alert that notifies me if a host on this lookup stops sending data to index=abc. I was trying something like the search below, but am not having much luck:

| tstats count where index=abc host NOT [| inputlookup hosts.csv] by host

The lookup called hosts.csv is formatted with the column name being host, for example:

host
hostname101
hostname102
hostname103
hostname104
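A minimal sketch of the usual pattern for this, assuming the lookup column is literally named host: count events per host, append every expected host with a zero count, and keep the hosts whose total stays at zero.

| tstats count where index=abc by host
| append [| inputlookup hosts.csv | eval count=0]
| stats sum(count) as count by host
| where count=0

Run it over the window you care about (e.g. the last hour) and alert when the search returns any results.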
Recently, Eric Fusilero, VP of Splunk Global Enablement and Education, shared his point of view about the crucial need to educate and train the next generation of cyber defenders. In his blog entitled Educating the Next Generation of Cyber Defenders, he emphasizes the importance of addressing the cybersecurity skills gap and fostering diversity in the cybersecurity workforce. His post also highlights the National Cyber Workforce and Education Strategy (NCWES) announced by the Biden Administration, and how Splunk's mission to fortify the nation's defenses against cyber threats aligns powerfully with the U.S. president's initiative. Take a quick read to learn more about how Splunk is working with industry, academia, and government agencies through our cybersecurity training programs, certification tracks, free learning opportunities, and partnerships with educational institutions.

– Callie Skokos on behalf of the Splunk Education Crew
So we are trying to send syslog from our BeyondTrust PRA appliance to Splunk. We have validated via the SSL/TLS test that the connection is good. I have the cert on both sides, so this appears to be okay. We do not see the events in the index though. Configured inputs.conf in the /local folder as follows:

[tcp-ssl://6514]
disabled = false

[SSL]
requireClientCert = false
serverCert = /opt/splunk/etc/auth/custom/combined.cer
sslVersions = tls1.2
cipherSuite = AES256-SHA

We have the input set up in the web interface and have the correct index and source defined. No events coming in though. I've seen several articles from multiple years back on configuring this. The TLS handshake works, so what are we missing? Thanks in advance! FYI: Tried this over UDP using a non-TLS input and the data comes in fine, but when we try with SSL it never shows up in the index.
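A quick diagnostic sketch, assuming default internal logging: search splunkd's own log for TCP input and SSL errors around the time the appliance transmits.

index=_internal source=*splunkd.log* (TcpInputProc OR SSL) (ERROR OR WARN)

A common culprit with tcp-ssl inputs is the serverCert file: it generally needs the server certificate, its private key, and any intermediate chain concatenated in one PEM file, with sslPassword set in the [SSL] stanza if the key is encrypted.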
Hi Team, We need your help in reviewing Data Collectors created in AppDynamics for a .NET WCF service. We need your guidance, as the newly created Data Collectors are not working. Thank you, Deepak Paste
Hi, I am trying to upload an Elastic log file to Splunk. This is an example of one entry in a long log:

{"_index":"index-00","_type":"_doc","_id":"TyC0RIkBQC0jFzdXd-XG","_score":1,"_source":"{"something_long":"long json"}\n","stream":"stderr","docker":{"container_id":"d48887cdb80442f483a876b9f2cd351ae02a8712ec20960a9dc66559b8ccce87"},"kubernetes":{"container_name":"container","namespace_name":"namespace","pod_name":"service-576c4bcccf-75gzq","container_image":"art.com:6500/3rdparties/something/something-agent:1.6.0","container_image_id":"docker-pullable://art.com:6500/3rdparties/something/something-agent@sha256:02b855e32321c55ffb1b8fefc68b3beb6","pod_id":"3c90db56-3013a73e5","host":"worker-3","labels":{"app":"image-service","pod-template-hash":"576c4bcccf","role":"image-ervice"}},"level":"info","ts":1689074778.913063,"caller":"peermgr/peer_mgr.go:157","msg":"Not enough connected peers","connected":0,"required":1,"@timestamp":"2023-07-11T11:26:19.133326179+00:00"}}

As you can see, the timestamp is at the end. So I have set up my props.conf as follows:

[elastic_logs]
DATETIME_CONFIG =
INDEXED_EXTRACTIONS = json
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Custom
description = make sure timestamp is taken
pulldown_type = 1
TIME_PREFIX = "@timestamp":\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N%z
MAX_TIMESTAMP_LOOKAHEAD = 1000

I can see the timestamp in the Splunk entries, but that is all I can see now; all the other fields are not displayed. What am I doing wrong?
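A sketch of one thing worth trying, under the assumption that each event is a single JSON object per line: switch from index-time JSON extraction to search-time extraction, since INDEXED_EXTRACTIONS can fail silently on not-quite-valid JSON (the unescaped quotes inside _source in the sample would trip a strict parser).

[elastic_logs]
KV_MODE = json
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = false
TIME_PREFIX = "@timestamp":\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%9N%z
MAX_TIMESTAMP_LOOKAHEAD = 1000

The sample timestamp carries nine fractional digits, so %9N may match more reliably than %6N; Splunk's timestamp parsing is often lenient here, so treat that as a guess to verify.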
Hello, We have a few apps that are no longer needed in our on-premise environment. We maintain a git repo for configs. Can anyone please help me with the steps to uninstall/remove the apps? Thanks
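A minimal sketch for a standalone instance (the app name is illustrative; clustered app removal goes through the deployer or manager node instead):

# remove via the CLI, then restart
$SPLUNK_HOME/bin/splunk remove app my_old_app -auth admin:changeme
$SPLUNK_HOME/bin/splunk restart

Deleting $SPLUNK_HOME/etc/apps/<app_name> and restarting achieves the same thing; either way, also drop the app from the git repo so it doesn't get redeployed.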
index=serverX sourcetype=CAServer
| dedup ID
| stats count
| eval status=if(count=0,"XXX is ok","XXX is not ok")
| rangemap field=count low=0-0 severe=1-100

This works and replies with 34 counts and is red; however, I want to return the status text with the red, not just the number. I can return the status with | stats status, but it is in black and white. Any help is appreciated.
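A sketch of one way to get the colored text, assuming this feeds a single value visualization: rangemap emits a range field, so keep the status string as the displayed field and let range drive the color.

index=serverX sourcetype=CAServer
| dedup ID
| stats count
| rangemap field=count low=0-0 severe=1-100
| eval status=if(count=0,"XXX is ok","XXX is not ok")
| fields status range

In a classic single value panel the first field returned is what gets displayed, while the range field (low/severe) is what the color decoration keys on.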
How can I get the SVC usage for saved searches and ad-hoc searches? These logs don't have it:

index="_internal" data.searchType="scheduled"

index="_audit" sourcetype="audittrail" action="search" info="completed"
Hi! I'm faced with writing a query with an additional check and I can't find a way out. I will be glad if you point me in the right direction or help with advice. We have the following custom logic:

1. When a user does some action (what it is doesn't matter), we generate an event in index=custom with the following fields: evt_id: 1, user_id: 555 (example).
2. The user should confirm that he is doing this "some action" in a third-party app, and this app generates the next event in index=custom: evt_id: 2, user_id: 555 (example), msg: confirmed.
3. If the user has NOT CONFIRMED the action from step 1, we need to generate an alert. It means that Splunk didn't receive evt_id: 2 in index=custom.

The alert logic is the following: we need to alert when evt_id: 1 was more than 5 minutes ago (the time the user has to confirm "some action") and there is NO evt_id: 2 with the same user_id by the time the alert runs. I understood that I need to do the first search like (example): index=custom evt_id=1 earliest=-7m latest=-5m. But I have no idea how to implement the additional condition with evt_id: 2. If we didn't have the user_id field, then I could use the stats count command, but I need to correlate both events (1 and 2) via the user_id field. Thanks for your help, have a nice day.
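A minimal sketch of the correlation, assuming evt_id and user_id are extracted fields: pull both event types over a wide-enough window, fold them per user, and keep the users whose action is older than five minutes with no confirmation.

index=custom (evt_id=1 OR evt_id=2) earliest=-15m
| stats min(eval(if(evt_id=1,_time,null()))) as action_time, sum(eval(if(evt_id=2,1,0))) as confirmations by user_id
| where isnotnull(action_time) AND confirmations=0 AND action_time <= relative_time(now(), "-5m")

Scheduled every few minutes, this alerts once per unconfirmed user_id; widening or narrowing earliest controls how long an unconfirmed action keeps firing.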
Splunk PS installed UBA a while back, and I just noticed that we are not getting OS logs from those servers into Splunk Enterprise. Since we have a 10-node cluster, I was trying to find a quicker way to manage them. Is there a reason I shouldn't connect the Splunk Enterprise instances running on all of those nodes to the deployment server?
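For reference, pointing an instance at a deployment server is a one-stanza change; a sketch, with a hypothetical hostname:

# deploymentclient.conf on each node
[target-broker:deploymentServer]
targetUri = ds.example.com:8089

The usual caution is just to scope the server classes carefully so the deployment server never pushes apps that would conflict with the configuration UBA's installer laid down on those nodes.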
Hello, community, I wanted to share a challenge that I have mapping fields to Data Models. The issue is that I have identified/created fields that are required for a Data Set, but they are not auto-populating, i.e. they cannot be seen by the Data Model/Set. Any suggestions on where I might be going wrong? Regards, Dan
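One quick check, sketched with the CIM Web model as a stand-in for whichever model you're mapping to: the datamodel command shows exactly which events satisfy the data set's constraints, so if it returns nothing, the tags/eventtypes gating the data set are the first suspect rather than the field extractions.

| datamodel Web Web search
| head 10

Fields also only surface in a data set when their names match the model's expected field names exactly, so a rename or FIELDALIAS in props.conf is often the missing piece.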
Is it possible to display textual (string) values instead of numbers on the Y axis? I have a time series with a field called "state", which contains an integer number. Each number represents a certain state. Examples:

0="off", 1="on"
0="off", 1="degraded", 2="standby", 3="normal", 4="boost"

Now I would like to have a line or bar chart showing the respective words on the Y axis ticks instead of 0, 1, 2, 3, 4. Note: This was already asked but not answered satisfactorily: https://community.splunk.com/t5/Splunk-Search/Is-it-possible-to-make-y-axis-labels-display-quot-on-quot-and/m-p/222217
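As far as I know, classic Splunk charts offer no native numeric-to-label mapping for Y-axis ticks, so the usual workaround is to map the values at search time and chart against the label as a category; a sketch using the states from the post:

... | eval state_label=case(state=0,"off", state=1,"degraded", state=2,"standby", state=3,"normal", state=4,"boost")
| timechart count by state_label

This trades the single numeric line for one series per state, which is often an acceptable reading of "which state were we in over time".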
Hi Everyone! I'm here to share the resolution for one of the frequent errors that we see in the internal logs with sourcetype=splunkd. If you happen to encounter the below error:

"Failed processing http input, token name=token_name, parsing_err="Incorrect index", index=index_name"

please make sure that your index name has been added to the respective HEC token. To avoid this error, add the index under the respective token as soon as a new index is created:

[http://token_name]
disabled = 0
index = default_index_name
indexes = index1, index2, index3, <add your index here>

Cheers
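A quick way to verify the fix, sketched with an illustrative hostname and token: send a test event that explicitly targets the newly allowed index.

curl -k https://splunk.example.com:8088/services/collector/event \
  -H "Authorization: Splunk <your-token>" \
  -d '{"event": "hello", "index": "index3"}'

If the index is still missing from the token's list, the collector answers with the same "Incorrect index" complaint instead of {"text":"Success","code":0}.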
Hi, I have the below data:

_time                           SQL_ID   NEWCPUTIME
2023-10-25T12:02:10.140+01:00   ABCD     155.42
2023-10-25T11:57:10.140+01:00   ABCD     146.76
2023-10-25T11:47:10.156+01:00   ABCD     129.34
2023-10-25T11:42:10.163+01:00   ABCD     118.84
2023-10-25T12:07:10.070+01:00   ABCD     163.27
2023-10-25T11:52:10.150+01:00   ABCD     139.34

The EXPECTED OUTPUT is:

_time                           SQL_ID   NEWCPUTIME   delta
2023-10-25T12:07:10.070+01:00   ABCD     163.27       7.85
2023-10-25T12:02:10.140+01:00   ABCD     155.42       8.66
2023-10-25T11:57:10.140+01:00   ABCD     146.76       7.42
2023-10-25T11:52:10.150+01:00   ABCD     139.34       10
2023-10-25T11:47:10.156+01:00   ABCD     129.34       10.5
2023-10-25T11:42:10.163+01:00   ABCD     118.84       118.84

The Splunk output, which is not correct:

_time                           SQL_ID   NEWCPUTIME   delta
2023-10-25T12:07:10.070+01:00   ABCD     163.27
2023-10-25T12:02:10.140+01:00   ABCD     155.42       7.85
2023-10-25T11:57:10.140+01:00   ABCD     146.76       8.66
2023-10-25T11:52:10.150+01:00   ABCD     139.34       7.42
2023-10-25T11:47:10.156+01:00   ABCD     129.34       10
2023-10-25T11:42:10.163+01:00   ABCD     118.84       10.5

I'm using the below query:

index=data sourcetype=dataset source="/usr2/data/data_STATISTICS.txt" SQL_ID=ABCD
| streamstats current=f window=1 global=f last(NEWCPUTIME) as last_field by SQL_ID
| eval NEW_CPU_VALUE = abs(last_field - NEWCPUTIME)
| table _time, SQL_ID, last_field, NEWCPUTIME, NEW_CPU_VALUE

I tried using the delta command as well; however, I'm not getting the expected output with it either.
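A sketch of a fix, assuming events arrive newest-first as in the output above: the delta you want is against the chronologically previous sample, so sort ascending before streamstats, treat the missing first value as 0, and sort back.

index=data sourcetype=dataset source="/usr2/data/data_STATISTICS.txt" SQL_ID=ABCD
| sort 0 _time
| streamstats current=f window=1 last(NEWCPUTIME) as prev by SQL_ID
| eval delta = round(NEWCPUTIME - coalesce(prev, 0), 2)
| sort 0 - _time
| table _time, SQL_ID, NEWCPUTIME, delta

With the sample data this yields 7.85, 8.66, 7.42, 10, 10.5, and 118.84 for the oldest row, matching the expected output.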
How can we measure the number of spool requests in SAP systems using AppDynamics?
Hi, I'd like to know how to associate the "url" tag with the Web data model. We're currently working with URL logs in our Splunk ES, but we're encountering difficulties in viewing the data model when conducting searches. Could someone kindly provide guidance on this matter? Thanks
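For reference, tagging for CIM data models is usually done with an eventtype plus a tag; a sketch with hypothetical names (note the Web data model's constraint is tag=web, so the url tag on its own won't pull events in):

# eventtypes.conf
[my_url_logs]
search = index=proxy sourcetype=my_url_sourcetype

# tags.conf
[eventtype=my_url_logs]
web = enabled
url = enabled

After that, | datamodel Web Web search should start returning the events, provided the CIM field names (url, status, action, ...) are also present.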
Hi, I aimed to merge the "dropped" and "blocked" values under the "IDS_Attacks.action" field in the output of the datamodel search and include their respective counts within the newly created "blocked" field, so that I can add it to the dashboard.

Output:

IDS_Attacks.action   count
allowed              130016
blocked              595
dropped              1123
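A minimal sketch, assuming a tstats query against the CIM Intrusion Detection model: rewrite dropped to blocked after the aggregation, then re-aggregate so the two counts collapse into one row.

| tstats summariesonly=true count from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.action
| rename IDS_Attacks.action as action
| eval action=if(action="dropped", "blocked", action)
| stats sum(count) as count by action

With the numbers above, blocked would come out as 1718 (595 + 1123) next to allowed's 130016.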
Hi, Not sure how to get a continuous bar between login and logout. As you can see in the picture, it's marked at login, then a lot of empty space, and then at logout. The best would be for everything to be color-marked from login until logout. I thought it could be done through format options, but not this time. Hope someone can help me with it. Rgds
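Without seeing the search it's hard to be precise, but one sketch that often produces a solid login-to-logout bar is to collapse each session into a single event with a duration and feed that to a timeline/Gantt-style visualization (index and field names below are assumptions):

index=auth (action=login OR action=logout)
| transaction user startswith="action=login" endswith="action=logout"
| table _time user duration

The transaction command leaves duration in seconds, which is what timeline-style visualizations typically expect for drawing one continuous bar per session.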