All Topics

I'm developing an inotify-based daemon to report core dumps occasionally occurring in a directory. Reporting the fact of the crash is easy, but we also want to include the debugger's output (showing the full stack at the time of the crash). Should I make that simply one large text field (with multiple newlines):

    {
        "PID": 1111,
        "stack": "#0 0x00000001 in ?? ()\n#1 0x28098e5f in xo_attr (name=0x5 <Address 0x5 out of bounds>, fmt=0x0) ...."
    }

or a list of lines:

    {
        "PID": 1111,
        "stack": [
            "#0 0x00000001 in ?? ()",
            "#1 0x28098e5f in xo_attr (name=0x5 <Address 0x5 out of bounds>, fmt=0x0)",
            ...
        ]
    }

Are there technical pluses and minuses to either approach, or is it just a matter of taste? For example, would Splunk's additional tools (such as the Patterns-seeker) prefer one method over another?
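If the events are indexed as JSON, one practical difference between the two shapes is how they extract at search time: the list form becomes a multivalue field via spath, which makes per-frame searching straightforward. A minimal SPL sketch (self-contained via makeresults; the event is the question's own example, abbreviated):

```
| makeresults
| eval _raw="{\"PID\": 1111, \"stack\": [\"#0 0x00000001 in ?? ()\", \"#1 0x28098e5f in xo_attr (...)\"]}"
| spath
| rename "stack{}" AS stack_line
| mvexpand stack_line
```

With the single-string form, the same per-line view needs an extra step such as `eval stack_line=split(stack, "\n")` before expanding.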
I have a couple of orphaned searches owned by a user who is no longer with the company (his user ID was deleted). I'm trying to re-enable those searches, but I'm unable to find them under Reassign Knowledge Objects. I'm logging in as an admin, so this should not be an access/permission issue. Any ideas? Thanks
Symptom: Our authentication datamodel is showing user=unknown for events that have a username defined in the log. Example:

    2020-02-07 09:31:11,161 - xxx.xxx.xxx.xxxx - INFO [net.shibboleth.idp.authn.duo.impl.ValidateDuoWebResponse:200] - Profile Action ValidateDuoWebResponse: Duo authentication succeeded for 'user1'
    2020-02-07 09:31:10,527 - xxx.xxx.xxx.xxx - INFO [net.shibboleth.idp.authn.impl.ValidateUsernamePasswordAgainstLDAP:152] - Profile Action ValidateUsernamePasswordAgainstLDAP: Login by 'user2' succeeded

The fields look OK, except user=unknown (respectively, below):

    action = success | app = shibboleth | src_user = user1 | tag = authentication tag = success | user = unknown
    action = success | app = shibboleth | src_user = user2 | tag = authentication tag = success | user = unknown

I thought that adding a field alias in the props.conf for this app would do the trick, but it still seems to display user=unknown in the datamodel. Here is the eval expression from the datamodel definition:

    src_user=if(isnull(src_user) OR src_user="","unknown",src_user),
    user=if(isnull(user) OR user="","unknown",user)
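A symptom like this usually means the alias never produced a `user` field at search time for the events feeding the datamodel, so the datamodel's eval fell through to "unknown". A minimal props.conf sketch, assuming a hypothetical sourcetype name `shibboleth:idp` (substitute the real sourcetype of these events):

```
[shibboleth:idp]
FIELDALIAS-shib_user = src_user AS user
```

If the datamodel is accelerated, the acceleration may need to rebuild before the change becomes visible in datamodel searches.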
I am currently ingesting O365 Exchange and Mimecast logs into Splunk, but I would like to start ingesting any links contained in the body of emails, to allow additional security checks. Has anyone come across a good way to achieve this? Thanks
I see that when I reassign ownership, the schedule won't kick in (next_scheduled_time just reads "none"). Until I open the search and manually hit Save, it seems like none of them will run at the original set time. Has anyone run into this before? Is there a REST call I can make to change the ownership based on the old owner?
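Regarding the REST part of the question: ownership of a saved search can be changed through the object's ACL endpoint. A hedged sketch, not a definitive procedure -- the search name, app, new owner, and credentials below are all placeholders for a live splunkd instance:

```
curl -k -u admin:changeme \
  https://localhost:8089/servicesNS/nobody/search/saved/searches/MySearch/acl \
  -d owner=newowner -d sharing=app
```

POSTing to the `acl` endpoint updates the owner; whether this also re-arms the scheduler without a manual re-save is worth verifying on one search before scripting it across all of them.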
Trying to set up the app on 7.3.0. I am able to see the device groups and activity groups when entering the EH IP and API key during the configuration process within the ExtraHop app, and the Data Inputs are created in the add-on; however, nothing is being logged.
Hi,

While configuring SIM in the machine-agent Docker container with container monitoring, it is failing with the error below.

    [Docker-Monitoring-0] 10 Feb 2020 13:03:08,624 WARN DockerRegistrationTask - Error registering containerId : a219e6035bf4ab0600b901dd82d2b6a41b240b190f4b9b18384fba223fb1b563 with error {}
    com.appdynamics.voltron.rest.utils.RestException: The requested object was not found. Please make sure you are attempting to access visible data. Another user may have deleted the data. This may also be a problem with the server. If the problem persists, please contact support. (404 - MISSING_ERROR)
    * Additional Info: Unable to find matching machine instance with hostId: a219e6035bf4, and the host machine with hostId: ip-10-105-2-16.us-west-2.compute.internal. Container registration without application app agent is not supported yet.
    * Server stack trace: Hidden
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at com.fasterxml.jackson.databind.introspect.AnnotatedConstructor.call(AnnotatedConstructor.java:119)
        at com.fasterxml.jackson.databind.deser.std.StdValueInstantiator.createUsingDefault(StdValueInstantiator.java:269)
        at com.fasterxml.jackson.databind.deser.std.ThrowableDeserializer.deserializeFromObject(ThrowableDeserializer.java:145)
        at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:148)
        at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:3814)
        at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:2858)
        at com.appdynamics.voltron.rest.client.VoltronErrorDecoder.decode(VoltronErrorDecoder.java:50)
        at feign.SynchronousMethodHandler.executeAndDecode(SynchronousMethodHandler.java:126)
        at feign.SynchronousMethodHandler.invoke(SynchronousMethodHandler.java:74)
        at feign.ReflectiveFeign$FeignInvocationHandler.invoke(ReflectiveFeign.java:97)
        at com.sun.proxy.$Proxy89.registerMachine(Unknown Source)
        at com.appdynamics.sim.agent.extensions.docker.DockerRegistrationTask.registerLightSystemAgent(DockerRegistrationTask.java:192)
        at com.appdynamics.sim.agent.extensions.docker.DockerRegistrationTask.call(DockerRegistrationTask.java:256)
        at com.appdynamics.sim.agent.extensions.docker.DockerRegistrationTask.call(DockerRegistrationTask.java:56)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
    [extension-scheduler-pool-2] 10 Feb 2020 13:03:08,663 WARN DockerMonitor - LightAgent not found in LightAgentRegistry for container - a219e6035bf4ab0600b901dd82d2b6a41b240b190f4b9b18384fba223fb1b563
    [extension-scheduler-pool-7] 10 Feb 2020 13:03:37,363 INFO ServersDataCollectorManager - There is change in components collection configurations.
Hi All,

I want to create a timechart which shows CPU utilization for each clienthost. There is only one event that occurs every 24 hours, and that event has 12 CPU utilization values. Each value is the CPU utilization for one 2-hour window in the last 24 hours. For example, if this is the event that got indexed at 18:00 today, 10/2/2020:

    cpuoverall: 1.51,9.47,1.70,1.45,1.51,1.47,1.46,1.46,1.48,1.48,1.50,1.50

then:

    cpu1=1.51 is the CPU utilization for the 2 hours between 18:00 and 20:00 of 09/2/2020,
    cpu2=9.47 is the CPU utilization for the next 2 hours, between 20:00 and 22:00 of 09/2/2020,
    cpu3=1.70 is the CPU utilization for the next 2 hours, between 22:00 and 00:00 of 09/2/2020,
    cpu4=1.45 is the CPU utilization for the next 2 hours, between 00:00 and 02:00 of 10/2/2020,

and so on. How can I create a timechart (as a line graph) for these values, in which each CPU value is marked at that particular hour of the day on the chart? In the end, the chart should be a line graph where each line represents a different clienthost, and the CPU values should be spread across time. Please let me know if this is possible. I have tried mapping time and CPU values, but I am not able to create a graph.
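One way to spread the 12 values across time is to split the list into a multivalue field, expand it to one row per value, and back-compute each value's timestamp from its position. A hedged SPL sketch -- the index, sourcetype, and field names are assumptions based on the question:

```
index=myindex sourcetype=cpu_report
| eval cpu=split(cpuoverall, ",")
| mvexpand cpu
| eval cpu=tonumber(cpu)
| streamstats count AS slot by _time, clienthost
| eval _time=_time - 86400 + ((slot - 1) * 7200)
| timechart span=2h avg(cpu) BY clienthost
```

Here `slot` numbers the 12 values 1-12 within each original event, and the `_time` rewrite places slot 1 at the start of the 24-hour window preceding the event's index time.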
Hello community,

I encountered a problem with BT exclusion. I configured a rule (the scope is good) to exclude a BT from automatically being discovered: URI contains 'socket'. The config is present in the transaction.xml file of my agent:

    <servlet-entry-points enabled="true">
        <custom/>
        <automatic-transaction-discovery enabled="true" resolution="first-entry-point">
            <excludes>
                <exclude name="SocketExclude">
                    <servlet-rule>
                        <enabled>true</enabled>
                        <priority>2</priority>
                        <excluded>false</excluded>
                        <uri filter-type="CONTAINS" filter-value="/socket"/>
                    </servlet-rule>
                </exclude>

but the agent is still registering BTs:

    [AD Thread Pool-Global19] 10 Feb 2020 16:42:16,542 INFO BusinessTransactionRegistry - Registered BT Name[/myapp/socket/750/oa4riml3/xhr_streaming], Id[540]
    [AD Thread Pool-Global19] 10 Feb 2020 16:42:16,542 INFO BusinessTransactionRegistry - Registered BT Name[/emyapp/socket/info], Id[541]
    [AD Thread Pool-Global19] 10 Feb 2020 16:42:16,542 INFO BusinessTransactionRegistry - Registered BT Name[/myapp/socket/750/gxqnx3vy/websocket], Id[542]

And I find them in the controller. Am I doing something wrong? Besides, I cannot exclude them manually in the BT screen, because parts 3 and 4 of the URI are dynamic.

Thank you
I have a dashboard that queries a lookup file. The lookup file contains a column of date timestamps in the format DD/MM/YY. The column name in the lookup is Date; it is called "Date (DD/MM/YY)" in the dashboard statistics panel. I am converting that DD/MM/YY string to Unix time in the drilldown using something like this:

    | eval unixtime=strptime('Date',"%d/%m/%y")

Which gives results like this:

    Date        unixtime
    06/02/20    1580947200.000000

1580947200.000000 is equivalent to 02/06/2020 @ 12:00am (UTC). That's a good start, but I want the drilldown search to cover that entire 24-hour period -- all of 06/02/20. Something like this seems like it should work:

    <eval token="earliest">strptime($row."Date (DD/MM/YY)"$,"%d/%m/%y")</eval>
    <eval token="latest">strptime($row."Date (DD/MM/YY)"$,"%d/%m/%y")+86400</eval>

86400 being the number of seconds in a day. But I can't quite get it working. Can anyone point me in the right direction?
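One thing worth checking is the row token name: field names containing spaces and parentheses are awkward inside `$row.<field>$` references. An approach is to give the column a plain name in the panel's search (e.g. a trailing `| rename "Date (DD/MM/YY)" AS Date`) and reference that. A hedged Simple XML sketch under that assumption:

```
<drilldown>
  <eval token="earliest">strptime($row.Date$, "%d/%m/%y")</eval>
  <eval token="latest">strptime($row.Date$, "%d/%m/%y") + 86400</eval>
</drilldown>
```

The drilldown's search or link can then consume `earliest=$earliest$ latest=$latest$` to cover the full 24-hour window.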
Hello experts,

I have a DB Connect connection to my DB that validates. The query that I send to the DB is:

    WITH "dte" as (SELECT * FROM "T_AUDIT_LOG_HISTORY"
                   UNION
                   SELECT * FROM "T_AUDIT_LOG")
    select * from "dte" where "UN_ID" > ? ORDER BY "UN_ID" ASC

I use a rising value on column 10 ("UN_ID"), which is an integer unique identifier that increases for every new record. This table is never updated; only inserts arrive. The first column has a timestamp that I link to the _time internal field. What I would expect is that every unique ID is imported just once, but this is not the case: every 15 minutes it imports a full copy of the whole table. Here is my config file for this connector:

    [AUDIT_LOG_HIST]
    connection = Production
    disabled = 0
    host = XXX_PROD
    index = xxx
    index_time_mode = dbColumn
    input_timestamp_column_number = 1
    interval = */15 * * * *
    mode = rising
    query = WITH "dte" as (SELECT * \
            FROM "T_AUDIT_LOG_HISTORY"\
            UNION\
            SELECT * \
            FROM "T_AUDIT_LOG"\
            )\
            select *\
            from "dte"\
            where "UN_ID" > ?\
            ORDER BY "UN_ID" ASC
    query_timeout = 60
    sourcetype = audit:log
    tail_rising_column_number = 10

I only need the new IDs, so that I don't get duplicates in my index.

Thanks in advance,
P
I'm currently working through each of my company's Java apps and updating their sourcetypes, using transforms and regexing each sourcetype. With a few exceptions, most apps will have an app, access, and audit log. The issue I've now run into is that one of the apps we use has several logs that would fall under the "app log" remit; however, the log formatting is completely different, so there is no way to use the standard regex we use for app logs. For example, a standard app log would have each entry prefixed with the following date/time:

    2020-02-10T00:02:39,851

The app I'm currently working on has an app log of:

    Feb 10, 2020 10:40:03 AM GMT

Is it possible to have multiple BREAK_ONLY_BEFORE regexes for a sourcetype in props.conf? I'm trying to avoid having to create a brand-new sourcetype just for one app's app log. I hope this question makes sense; please let me know if you need any more information.
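As far as I know, BREAK_ONLY_BEFORE takes a single regex per sourcetype, but that regex can use alternation to cover both timestamp styles, so a separate sourcetype may not be necessary. A hedged props.conf sketch (the stanza name is assumed):

```
[java:applog]
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE = ^(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2},\d{3}|[A-Z][a-z]{2} \d{1,2}, \d{4} \d{1,2}:\d{2}:\d{2} [AP]M)
```

The first alternative matches the ISO-style prefix (2020-02-10T00:02:39,851), the second the "Feb 10, 2020 10:40:03 AM" style; timestamp recognition itself may still need a matching TIME_FORMAT or Splunk's automatic detection.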
I am using a Splunk Cloud environment. I am interested to know how many buckets are created for an index and what the default size of a bucket will be.

Issue in my environment: We onboarded some log files into Splunk a couple of months back, but the timestamp of those logs shows an older date, from the year 2016. The log format contains a time but not a date. According to the link below, Splunk should automatically apply the date, and it should mostly match the system time; but having an event with a 4-year-old date is incorrect.

https://docs.splunk.com/Documentation/Splunk/7.2.4/Data/HowSplunkextractstimestamps

How does the bucket date/time get created? As per the documentation, the format of a bucket name is:

    db_<newest_time>_<oldest_time>_<localid>

https://docs.splunk.com/Documentation/Splunk/8.0.1/Indexer/HowSplunkstoresindexes

Will the bucket name get created with the actual earliest date/time present in the bucket, or based on the first event which is present in the bucket?

Example: a new hot bucket is created on 5th Jan 2020, so it contains the first event as 01/05/2020 01:00:00:345. But due to the incorrect timestamp assignment I explained above, it also has an event with 04/03/2016 01:00:00:211 (a 4-year-old timestamp). Now, when it rolls to a cold bucket, what will the name be? Will it be db_Feb10th2020_Jan5th2020_ or db_Feb10th2020_Apr3rd2016_
Hi everyone,

I'm trying to find the top 10 values across different hosts, log_message, and functionality. So I tried:

    index=* "error" OR "FAIL" OR "fatal"
    | stats values(functionality) values(correlatioid) values(loan_num) values(host) count by log_message
    | sort -count

It shows the top errors with functionality, host, and loan_num details for each and every error. My requirement is to get the top error counts per host or per functionality. Currently it shows something like:

    Functionality:
    Abc
    Xyz
    123

Let's say the Abc functionality has the most errors; then the table should give the count for Abc along with its percentage among all the obtained errors, like this:

    Functionality:
    Abc - 109 (98% among all errors)
    Xyz - 1   (1%)
    123 - 1   (1%)

Any suggestions? Similarly, I want to see the top errors coming from different sources.
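The count-plus-percentage view can be produced with the `top` command (which emits count and percent automatically), or built explicitly for more control. A sketch using the field names from the question:

```
index=* "error" OR "FAIL" OR "fatal"
| stats count by functionality
| eventstats sum(count) AS total
| eval percent=round(100 * count / total, 1)
| sort -count
```

Swapping `functionality` for `host` or `source` in the `stats ... by` clause gives the same breakdown per host or per source.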
Hi *,

Basically this is not a real question but more an analysis of the somewhat broken syslog format of some messages issued by ESXi. No answers are expected, but comments are welcome, especially if you are hit by the problems described here.

Our setup is pretty standard according to the Splunk syslog forwarder recommendations: We are using part of the Splunk Add-on for VMware (version 3.4.6, only the Splunk_TA_esxilogs) for indexing the ESXi logs into Splunk. The syslogs from the ESXi hosts are forwarded (with UDP or TCP) to a central (r)syslog server, which itself is a Splunk Universal Forwarder and sends all received logs from various devices to the Splunk indexer. The syslog parameters on the ESXi hosts have been changed for longer syslog sizes (i.e. 4096 bytes instead of the usual 1024 bytes).

Our observations: Large ESXi syslog messages are identified and indexed as expected -- mainly, with some exceptions! But: The "health check" of the monitoring console often finds recognition problems related to timestamps and line breaks. All the events with the problems have the sourcetype vmw-syslog, which is used by the installed TA_esxilogs as a temporary sourcetype. If you use this TA, then searching for and finding this sourcetype in your index may be an indication that you are affected by the problems described in this article. Sometimes a very large number of VMware ESXi events (sometimes millions!) are indexed on just one (!) timestamp; this usually correlates with the timestamp recognition problems above. Fiddling with the LINE_BREAKER, SHOULD_LINEMERGE, TZ, timestamp format, or other pre-indexing conversions really does not help. There are various other questions and answers regarding ESXi syslogs on Splunk Answers, but none of them helped us get rid of the problem.

The Wireshark analysis: Time for examining ESXi syslogs at the network packet level with Wireshark. We captured the syslog traffic on the (r)syslog server at the incoming network interface, to catch the packets in exactly the same format as they are sent from the ESXi host. The format of some ESXi syslog messages is badly broken! ESXi uses a funny kind of multiline syslog message for a few events, and these events are chunked into packets of less than 1024 bytes, regardless of the syslog packet size set on the ESXi host (see above). The first packet (a "syslog line") is correct according to the syslog packet format:

    <12>                      [priority]
    2020-02-06T10:35:44.222Z  [timestamp]
    bxa-b4...                 [hostname]
    VSANMGMTSVC:              [process]
    ...                       [syslog message text]
    \n                        [terminating LF]

So far, so good. BUT the next 2-3 continuation lines are totally mangled and wrong for syslog packets (according to the RFC, which by the way does not define multiline syslogs): The continuation lines start with the priority field -- I would accept this -- but there is NO timestamp; instead, the next few bytes of the syslog message text follow, and now it gets ugly: the packet actually CONTAINS both the hostname and the process, followed by more text of the syslog message and the terminating LF. This funny kind of continuation line goes on until the current "long event message text" of this single message has been processed completely. Then the next "normal" syslog message follows. This broken format cannot be fixed easily with props or transforms inside Splunk! To repeat very clearly: this is the format ORIGINATING from the ESXi host, as captured directly on the wire without any processing!

Attached is a sample of one of the broken messages captured by Wireshark: textual output from the Wireshark capture. IP and MAC addresses have been shortened. We will file a bug report with VMware, but our expectations of getting a fix are very low.

Hope this helps others wondering about issues with ESXi logging. Have fun and happy Splunking!

Stephan
I am using regex to extract a field, but I need 2 different regexes. So under transforms.conf I made 2 different regexes for the same field, and under props.conf I called them. I am trying to achieve 3 things:

1- mask data in the URI if needed
2- concatenate fields if masked
3- extract the URI

URIs come in 2 different forms:

1- uri_path: all letters, with 1 field to extract, i.e. /Core/Test/
2- uri_path_profile: letters & numbers, with 3 fields to extract, i.e. /Test/?id={NIN}&contactType={type}, where NIN is any 10-digit combination and type is one of 3 possible strings.

transforms.conf:

    # Field extraction for uri path
    [uri_path]
    REGEX = uri":"([\/A-Za-z]+)
    FORMAT = uri::$1

    [uri_path_profile]
    REGEX = uri":"([\/A-Za-z]+)\?id=(\w+)&contactType=(\w+)
    FORMAT = uri::$1?id=NIN&contactType=$3 NIN::$2 contact_type::$3

My end goal is to have both extracted regexes feed one field called uri, but since the fields matched by the 2nd stanza are dynamic and will have a lot of entries, I'd like them all to collapse into one value, which would be:

    uri=/Test/?id=NIN&contactType=(group_3_value)

So even if NIN has thousands of different records, it will only show the 3 different strings at the end. Is this doable?
Hi all,

I'm trying to understand how the TA-MS-AAD add-on works. I configured a data input to collect data about billing and consumption, setting the interval to 600 and "Max days to query" to 4 on my local instance. I'm receiving data about billing (sourcetype="azure:billing"); the data covers every instance, and I'm receiving daily costs. However, for some days I'm not receiving data (e.g. I have data for 2020-02-02, 2020-02-03, and 2020-02-05, but not for 2020-02-04). Is this normal? Does anyone know of a guide to configure the data input correctly?

Thank you,
Giorgio
I have a JSON file like this:

    {
        "ACC_NAME": "A",
        "DEPT": [{
            "NAME": "D1",
            "PROJECT": [{
                "P_NAME": "xyz",
                "START_DATE": "1/5/2020 5:01:00",
                "END_DATE": "1/31/2020 7:21:32",
                "STATUS": "PASS"
            }]
        }]
    }

I have been trying to set TIME_PREFIX and it's not working. As you can see, there are two dates. When I set TIME_PREFIX=START_DATE I got a timestamp error. How can I set TIME_PREFIX so that both dates are handled?
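TIME_PREFIX is a regular expression that must consume everything up to the first character of the timestamp, including the opening quote, not just the bare field name; TIME_FORMAT then describes the timestamp itself. A hedged props.conf sketch for this event shape (the stanza name is assumed):

```
[my_json_sourcetype]
TIME_PREFIX = "START_DATE"\s*:\s*"
TIME_FORMAT = %m/%d/%Y %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 20
```

Note that only one timestamp can become _time for an event; END_DATE remains available as an ordinary extracted JSON field at search time.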
Hello,

In Enterprise Security's Asset Center I'd like to create a new field called "Comment". The goal is to fill it with different pieces of information like serial number, OS, installation status, etc., so that it looks like this:

    Install Status: Preproduction
    OS: Microsoft Windows Server 2019
    Serial Number: ABCD1234

To keep the field names, I tried to use the eval below:

    | eval comment="Install Status: " . install_status . ",OS: " . os . ",Serial Number: " . serial_number
    | rex mode=sed field=comment "s/,/\n/g"

But unfortunately some of the field values can be null, which makes the final value of the "Comment" field null (even if the other fields are not empty). To avoid that, I replaced eval with mvappend:

    | eval comment=mvappend(install_status, os, serial_number)

That solves the null-value issue, but now I have no idea how I can keep the field names. Could you please help me find a workaround?

Thanks for the help.
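One way to keep the labels while tolerating nulls is to attach each label to a coalesced value, so a missing field turns into a placeholder instead of nulling the whole concatenation. A sketch using the field names from the question:

```
| eval comment=mvappend(
    "Install Status: " . coalesce(install_status, "N/A"),
    "OS: " . coalesce(os, "N/A"),
    "Serial Number: " . coalesce(serial_number, "N/A"))
```

Since mvappend skips null arguments, an alternative that drops missing entries entirely is to build each labelled part conditionally, e.g. `if(isnotnull(os), "OS: " . os, null())`.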
Hi all,

I am having major issues with creating drilldowns to correlation searches using tokens that contain process paths. The problem is that Splunk doesn't handle the "\" correctly. I have tried to modify the token and replace every "\" with "\\", but with no luck. Does anyone know how to work around this issue?

Example drilldown:

    | from datamodel:Endpoint.Processes
    | search process_path=$process_path$ AND dest=$dest$

where $process_path$ = "C:\Program Files\Windows Defender Advanced Threat Protection\Classification\SenseCE.exe"

Thanks in advance!
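In Simple XML, the `|s` token filter wraps a token's value in double quotes and escapes embedded quotes when it is substituted into a search string, which often sidesteps backslash mangling. A hedged sketch of the drilldown search under that approach:

```
| from datamodel:Endpoint.Processes
| search process_path=$process_path|s$ AND dest=$dest|s$
```

Whether this fully protects the backslashes depends on where the token is expanded (drilldown link vs. search string), so it is worth testing against one known path first.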