All Topics

Hello, let's say I have the following tables:

index=events: _time event_id ip
index=connections: _time ip_address user

When users connect to the system, the connection gets registered in the connections table with the time of the connection, the user, and the ip_address assigned to that user. Users stay attached to this IP until they disconnect. When a user disconnects and a new user connects to the system, the same ip_address can be assigned to the new user. Events are provoked and registered in the events table, associated with an ip, along with the _time of the event and a unique event_id. ip and ip_address are the same address but have different names in each table.

If I want to obtain the user behind the ip that provoked a single event based on event_id (3000 in this example), which means the last user that was connected to the ip before the event happened, I would do:

index=connections
    [ search index=events event_id="3000"
      | head 1
      | eval latest=_time
      | eval earliest=relative_time(latest, "-24h")
      | eval ip_address=ip
      | return earliest latest ip_address event_id ]
| sort by -_time
| head 1
| table _time event_id ip_address user

This gives a table with a single row. For example, with these tables:

connections
22:00  10.0.0.5   margarita
19:00  10.0.0.17  charles
11:00  10.0.0.5   alice

events
23:00  3002  10.0.0.5
20:00  3001  10.0.0.17
18:00  3000  10.0.0.5

I would get:

18:00  3000  10.0.0.5  alice

The search gets the last user (alice) that was connected to the ip (10.0.0.5) before the event _time (18:00), even though there is a connection registered later with that ip.

Now I would like to obtain this result for every row in the events table, in the format:

_time  event_id  ip  user

For example, for the two tables above, I would like to get:

23:00  3002  10.0.0.5   margarita
20:00  3001  10.0.0.17  charles
18:00  3000  10.0.0.5   alice

It should work like a join on ip = ip_address, but taking into account that the event _time defines which row of connections is used, as there can be more than one row with the same ip_address. I have thought about and tried different approaches, such as adding multiple subsearches, using both join and a subsearch, and using the foreach command, but I always run into the problem that I can't return more than one "latest". I feel like there should be an easy way to achieve this, but I am not an expert with Splunk. Any ideas or hints on how I could achieve this? Thank you very much!
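One possible approach, sketched here as a rough idea rather than a definitive answer: pull both indexes into one search, line the rows up per IP in time order, and carry the last seen user forward with streamstats. It assumes the events rows have no user field of their own and that both indexes fit into a single search.

(index=events) OR (index=connections)
| eval ip_addr=coalesce(ip, ip_address)
| sort 0 ip_addr _time
| streamstats last(user) as user by ip_addr
| where isnotnull(event_id)
| rename ip_addr as ip
| table _time event_id ip user
| sort 0 -_time

With the sample data above this should return margarita, charles and alice for events 3002, 3001 and 3000 respectively, because streamstats only ever looks backwards in time within each IP group.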
I have a time series data source where an alert writes an event indicating that the number of systems an account is logging into is increasing over a set window of time. Each event lists all the machines, including the new one that the account incremented by, in a multivalue field. Broken out by day, I'm trying to display only the unique machines, e.g.

| stats values(IPS) as ips values(computer) as dvcs by user _time

I tried to accomplish this using mvdedup, but that is only capable of deduping a multivalue field in a given event, not a full time series search result. Would love any advice you may have to accomplish this.
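A minimal sketch of one way to get there, assuming "broken out by day" means a one-day bin on _time and that the field names match the stats call above; values() de-duplicates on its own once the rows are grouped at the day level:

... base search ...
| bin _time span=1d
| stats values(IPS) as ips values(computer) as dvcs by user _time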
Sample text from a log that I'm searching: "store license for Store 123456 2022-03-27 02:01:59,649 [XNIO-2 task-3] ERROR"

I'm trying to search for, and return, a store number that's associated with a particular error. The following search successfully returns the store number (and count):

index=* host="log*" "store license for"
| rex field=_raw "Store\s(?P<storenumber>.*)"
| stats count by storenumber

But when I try to search for the store number along with the error string that follows it, I get "no results found." Here's the search I'm trying:

index=* host="log*" "store license for"
| rex field=_raw "Store\s(?P<storenumber>.*)[\r\n]+2022\-03\-27\s02:01:59,649\s\[XNIO-2\stask-3\]\sERROR"
| stats count by storenumber

Splunk doesn't seem to like the newline character. I've tried \n and [\r\n] and others, but all with the same incorrect results.
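In case it helps, a hedged variation: in the regex, . does not match newlines unless the (?s) flag is set, and limiting the capture to digits keeps storenumber from swallowing the rest of the line. This sketch assumes the ERROR line always follows the store number and drops the hard-coded timestamp:

index=* host="log*" "store license for"
| rex field=_raw "(?s)Store\s+(?<storenumber>\d+)\s+[^\[]*\[XNIO-2\stask-3\]\sERROR"
| stats count by storenumber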
Hi, I have lots of data that I want to see on one screen, so I need to use a transpose. When I do this I can't add colours easily. As you can see below, this is what I want: when the % is over 100 it goes red, and below 100 it goes green, but only for the columns with % in them. Any help would be great, thanks.
Hello, I have a search that prints out a list of numbers in this format:

[144 ==> 143] [145 ==> 144] [144 ==> 145] [145 ==> 144] [144 ==> 145] [143 ==> 144] [144 ==> 143] [143 ==> 144] [144 ==> 143] [143 ==> 144] [142 ==> 143] [143 ==> 142] [144 ==> 143]

I want to extract the last three digits after the ">" sign. For example, [144 ==> 143] turns into 143. Then I want a summation of those values, so I guess I need to turn them into ints. Here is what I have so far:

rex "==>(?<regexusers>.*)"

where regexusers is what is being saved. Any help will be greatly appreciated!
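A sketch of one way to pull out the trailing numbers and add them up, assuming all the bracketed pairs live in the _raw text of the events; max_match=0 makes rex capture every occurrence into a multivalue field:

... base search ...
| rex max_match=0 "==>\s*(?<regexusers>\d+)\]"
| mvexpand regexusers
| eval regexusers=tonumber(regexusers)
| stats sum(regexusers) as total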
I am attempting to get Splunk to recognize a specific column in a CSV as the _time column (Current_time) upon ingestion.

***Sample Data***
Some sample lines from the CSV in question:

name,dpname,last_login,create_date,modify_date,Current_time
CAT-user,testing,,2021-01-29 09:47:42.340000000,2021-01-29 09:47:42.340000000,2022-03-24 13:18:36.390000000
master,test,,2021-09-16 11:09:21.597000000,2021-09-16 11:09:21.597000000,2022-03-24 13:18:36.390000000
model,database,,2003-04-08 09:10:42.287000000,2003-04-08 09:10:42.287000000,2022-03-24 13:18:36.390000000

Note that multiple columns include timestamps. I want Splunk to ingest them but not use them for _time. Only Current_time should be _time.

***Error***
With the below config I am getting DateParserVerbose warning messages in the splunkd log for the host the Universal Forwarder is sending the CSV from (my-host):

0400 WARN DateParserVerbose - Failed to parse timestamp in first MAX_TIMESTAMP_LOOKAHEAD (128) characters of event. Defaulting to timestamp of previous event (Fri Jan 29 15:35:24 2021). Context: source=C:\temp\acct_0001.csv|host=my-host|transforms_sql_report_acct|13995

***Config***
We have a clustered environment with 1 search head, 1 deployment server, and 3 indexers.

On the host with the Universal Forwarder installed (deployed to the UF from the deployment server /opt/splunk/etc/deployment-apps/):

inputs.conf
[monitor://C:\temp\acct_*.csv]
index=department_index
sourcetype=sql_report_acct
crcSalt = <SOURCE>
disabled = 0

Props and transforms (located on the deployment server in /opt/splunk/etc/master-apps/_cluster/local; confirmed deployed to all 3 indexers in /opt/splunk/etc/slave-apps/_cluster/local):

props.conf
[sql_report_acct]
BREAK_ONLY_BEFORE_DATE=null
CHARSET=UTF-8
INDEXED_EXTRACTIONS=csv
KV_MODE=none
LINE_BREAKER=([\r\n]+)
NO_BINARY_CHECK=true
SHOULD_LINEMERGE=false
TIME_FORMAT=%Y-%m-%d %H:%M:%S.%9N
TIME_PREFIX=^([^,]*,){5}
category=Structured
description=sourcetype for my input
disabled=false
pulldown_type=true
REPORT-transforms_sql_report_acct = transforms_sql_report_acct

transforms.conf
[transforms_sql_report_acct]
DELIMS = ","
FIELDS = name, dpname, last_login, create_date, modify_date, Current_time

By my understanding, timestamp recognition is processed during the parsing phase of the data pipeline on the indexers. Slave app local should also have the highest priority for configs, so I am not quite sure what I am doing wrong here. How do I get the Current_time field recognized as _time? Thanks for your input!
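For comparison, a minimal props.conf sketch using the TIMESTAMP_FIELDS setting that exists for structured (INDEXED_EXTRACTIONS) sourcetypes; note that structured parsing generally happens on the universal forwarder itself, so a stanza like this would typically need to reach the UF, not only the indexers:

# sketch only; same sourcetype name as in the question
[sql_report_acct]
INDEXED_EXTRACTIONS = csv
TIMESTAMP_FIELDS = Current_time
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%9N
SHOULD_LINEMERGE = false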
Hi! I've installed DB Connect for the first time today and can successfully get data from Oracle. While testing I created a couple of DB Connect inputs with new sourcetypes (some name is required during creation) that I no longer need and that seem to be interfering with data processing, e.g. new columns that I've added in SQL are not visible in new events after a data fetch. I'd like to delete those source types but:
* they do not show up in Settings > Source Types
* they do show up in `| metadata type=sourcetypes index=myidx` but I don't know how to delete them using SPL

How can I delete these leftover zombie source types and basically start over? Is deleting the whole index the only way?
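If the zombie source types only exist as already-indexed events rather than as saved sourcetype configurations, one possible clean-up is the delete command, shown here as a sketch with a placeholder sourcetype name. It requires the can_delete capability, it is irreversible, and it only hides the events from search rather than freeing disk space:

index=myidx sourcetype="my_old_dbconnect_sourcetype"
| delete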
Hello, I am trying to develop a Splunk query, but the query that needs to be run depends on another Splunk query returning an empty result. What command can I use? Thank you.
Hello, I have logs where there are multiple values for two fields. The data looks like the example below for each event:

dest       user         builtinadmin
computer1  user1 user2  true false

It comes from this raw data:

<computer N=computer1 D=corp OS=Windows DC=false>
  <users>
    <user N="user1" builtinadmin="false" />
    <user N="user2" builtinadmin="true" />
  </users>
</computer>

Is there a way to show the data like this instead, where each user correctly correlates to the builtinadmin value?

dest       user   builtinadmin
computer1  user1  true
computer1  user2  false
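One common pattern for this kind of parallel multivalue data, sketched under the assumption that user and builtinadmin are extracted in the same order: zip the two fields together, expand, then split them apart again.

... base search ...
| eval pair=mvzip(user, builtinadmin, "|")
| mvexpand pair
| eval user=mvindex(split(pair, "|"), 0), builtinadmin=mvindex(split(pair, "|"), 1)
| table dest user builtinadmin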
I have this search where splunk_check_hostnames.csv is a single column of hostnames with hostname as the header.

index=_internal sourcetype=splunkd earliest=-24h [| inputlookup splunk_check_hostnames.csv]
| stats count by hostname, version, os

It works nicely. I'm trying to figure out how to do the below search without having to use a second lookup with the header being host.

index=os_* sourcetype=linux_secure OR source=WinEventLog:Security earliest=-24h [| inputlookup splunk_check_hostnames.csv]
| stats count by host, index, sourcetype

Any thoughts? TIA, Joe
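A sketch of one way to reuse the same CSV: rename the column inside the subsearch so the generated filter uses host instead of hostname (this assumes nothing about the lookup beyond the single hostname column described above):

index=os_* sourcetype=linux_secure OR source=WinEventLog:Security earliest=-24h
    [| inputlookup splunk_check_hostnames.csv | rename hostname AS host | fields host]
| stats count by host, index, sourcetype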
Guys, could you help me extract fields from raw events via props.conf on a HF? I tried the EXTRACT command, but it seems my regex is not OK, or I'm not sure what the issue is. I want to extract fields and give names to them.

Regex I tried:
^(?:[^,\n]*,){7}(?<src_ip>[^,]+),(?<dst_ip>[^,]+)(?:[^:\n]*:){2}\d+,\d+,\d+,(?<src_port>\d+),(?<dst_port>\d+)(?:[^,\n]*,){5}(?<action>[^,]+)(?:[^,\n]*,){38}

Also:
^(?:[^,\n]*,){7}src_ip=(?<src_ip>[^,]+),dst_ip=(?<dst_ip>[^,]+)(?:[^:\n]*:){2}\d+,\d+,\d+,src_port=(?<src_port>\d+),dst_port=(?<dst_port>\d+)(?:[^,\n]*,){5}action=(?<action>[^,]+)(?:[^,\n]*,){38}

Sample log:
Mar 31 18:18:35 LUM-EVERE-PAFW-R8-17-T1 1,2022/03/31 18:18:35,015701001564,TRAFFIC,drop,2305,2022/03/31 18:18:35,10.81.13.68,34.240.162.53,0.0.0.0,0.0.0.0,prodedfl_access_1289,,,not-applicable,vsys4,prodedfl,prodcore,ae1.1512,,Syslog_Server,2022/03/31 18:18:35,0,1,60353,443,0,0,0x0,tcp,deny,66,66,0,1,2022/03/31 18:18:35,0,any,0,7022483376390954281,0x8000000000000000,10.0.0.0-10.255.255.255,Ireland,0,1,0,policy-deny,920,0,0,0,Production,LUM-EVERE-PAFW-R8-17-T1,from-policy,,,0,,0,,N/A,0,0,0,0,2d8c02f8-e86f-43cf-a459-01acdb26580a,0,0,,,,,,,

Please help me extract fields like src_ip, dst_ip, src_port, dst_port, action, etc.
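As a point of comparison, here is a sketch of a purely position-based regex that matches the sample line above (src_ip and dst_ip are the 8th and 9th comma-separated values, src_port and dst_port the 25th and 26th, action the 31st), placed in a hypothetical search-time stanza. One caveat: EXTRACT is a search-time setting, so it is normally honoured on the search head rather than on a heavy forwarder.

# props.conf, search-time extraction; the sourcetype name is a placeholder
[pan_traffic_custom]
EXTRACT-network_fields = ^(?:[^,]*,){7}(?<src_ip>[^,]*),(?<dst_ip>[^,]*),(?:[^,]*,){15}(?<src_port>\d+),(?<dst_port>\d+),(?:[^,]*,){4}(?<action>[^,]*)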
I've installed and configured the Cisco AMP for Endpoints Events Input app 2.0.2, and the API calls seem to work, but data isn't coming in. Instead, the following messages are repeatedly logged to $SPLUNK_HOME/var/log/splunk/amp4e_events_input.log:

2022-03-31 11:35:05,815 ERROR Amp4eEvents - Consumer Error that does not look like connection failure! See the traceback below.
2022-03-31 11:35:05,816 ERROR Amp4eEvents - Traceback (most recent call last):
  File "/opt/splunk/etc/apps/amp4e_events_input/bin/util/stream_consumer.py", line 34, in run
    self._connection = pika.BlockingConnection(pika.URLParameters(self._url))
  File "/opt/splunk/etc/apps/amp4e_events_input/bin/pika/adapters/blocking_connection.py", line 377, in __init__
    self._process_io_for_connection_setup()
  File "/opt/splunk/etc/apps/amp4e_events_input/bin/pika/adapters/blocking_connection.py", line 417, in _process_io_for_connection_setup
    self._open_error_result.is_ready)
  File "/opt/splunk/etc/apps/amp4e_events_input/bin/pika/adapters/blocking_connection.py", line 469, in _flush_output
    raise maybe_exception
pika.exceptions.ProbableAuthenticationError: (403, 'ACCESS_REFUSED - Login was refused using authentication mechanism PLAIN. For details see the broker logfile.')

I don't know what broker logfile it's suggesting I reference, or how to fix this error, since the authentication type is hard-coded in the app. All the errors I'm finding when I search relate to RabbitMQ.
Hi, we have a severe performance issue with dbxlookup in DB Connect App 3.8 for a MySQL DB. dbxlookups in a Splunk query take several minutes to return results. The strange thing is that it only happens when using dbxlookup. Using dbxquery is blazing fast (1.4 seconds to return 100K results), and when configuring the lookup in the DB Connect app (where you run the actual SQL query that is to be used for the lookup) it is also extremely fast.

Example of using dbxlookup in a search:

| makeresults
| eval page_id=510376245
| dbxlookup connection="myDB" query="SELECT C.CONTENTID as content_id,C.TITLE as page_title, S.SPACEKEY as space_key,S.SPACENAME as space_name FROM CONTENT AS C LEFT JOIN SPACES AS S ON (C.SPACEID=S.SPACEID) ORDER BY C.CONTENTID DESC" "content_id" AS "page_id"

The search job inspector shows the long time the dbxlookup took. The search log does not yield any helpful information:

03-31-2022 16:27:11.988 INFO SearchParser [110743 StatusEnforcerThread] - PARSING: table _time, page_id
03-31-2022 16:27:11.988 INFO SearchParser [110743 StatusEnforcerThread] - PARSING: head 20
03-31-2022 16:27:11.988 INFO SearchParser [110743 StatusEnforcerThread] - PARSING: dbxlookup lookup="scwiki_db"
03-31-2022 16:27:11.988 INFO ChunkedExternProcessor [110743 StatusEnforcerThread] - Running process: /opt/splunk/jdk1.8.0_111/bin/java -Dlogback.configurationFile\=../config/command_logback.xml -DDBX_COMMAND_LOG_LEVEL\=DEBUG -cp ../jars/dbxquery.jar com.splunk.dbx.command.DbxLookupCommand
03-31-2022 16:27:12.585 INFO DispatchExecutor [110743 StatusEnforcerThread] - BEGIN OPEN: Processor=dbxlookup
03-31-2022 16:27:12.605 INFO DispatchExecutor [110743 StatusEnforcerThread] - END OPEN: Processor=dbxlookup
03-31-2022 16:27:12.605 INFO ChunkedExternProcessor [110743 StatusEnforcerThread] - Skipping custom search command since we are in preview mode: dbxlookup
03-31-2022 16:27:12.620 INFO PreviewExecutor [110743 StatusEnforcerThread] - Finished preview generation in 0.663433841 seconds.
03-31-2022 16:27:13.143 INFO DispatchExecutor [110818 phase_1] - END OPEN: Processor=table
03-31-2022 16:27:13.143 INFO DispatchExecutor [110818 phase_1] - BEGIN OPEN: Processor=dbxlookup
03-31-2022 16:27:13.364 INFO DispatchExecutor [110818 phase_1] - END OPEN: Processor=dbxlookup
03-31-2022 16:27:14.627 INFO ReducePhaseExecutor [110743 StatusEnforcerThread] - ReducePhaseExecutor=1 action=PREVIEW
-> here the delay happens
03-31-2022 16:29:44.577 INFO PreviewExecutor [110743 StatusEnforcerThread] - Stopping preview triggers since search almost finished
03-31-2022 16:29:44.580 INFO DownloadRemoteDataTransaction [110818 phase_1] - Downloading logs from all remote event providers
03-31-2022 16:29:44.849 INFO ReducePhaseExecutor [110818 phase_1] - Downloading all remote search.log files took 0.270 seconds
03-31-2022 16:29:44.850 INFO DownloadRemoteDataTransaction [110818 phase_1] - Downloading logs from all remote event providers

So it must have something to do with the way dbxlookup works. It could be a Java issue or a MySQL driver issue, a combination of both, or something completely different :-). We are using the latest DB Connect MySQL Add-On. I am grateful for any hints or tips on how to troubleshoot this further. Or an actual solution :-).
Hi, we are running a distributed Splunk environment and monitor the messages that appear when there are issues within the ecosystem. We have read about how to customize messages and the official Splunk docs for messages.conf, but weren't able to find good answers there. Maybe one of you has more experience with this:

https://docs.splunk.com/Documentation/Splunk/8.2.5/Admin/Customizeuserexperience
https://docs.splunk.com/Documentation/Splunk/8.2.5/Admin/Messagesconf

Can someone help explain these parameters and the behavior?

target = [auto|ui|log|ui,log|none]
* Sets the message display target.
* "auto" means the message display target is automatically determined by context.
* "ui" messages are displayed in Splunk Web and can be passed on from search peers to search heads in a distributed search environment.
* "log" messages are displayed only in the log files for the instance under the BulletinBoard component, with log levels that respect their message severity. For example, messages with severity "info" are displayed as INFO log entries.
* "ui,log" combines the functions of the "ui" and "log" options.
* "none" completely hides the message. (Please consider using "log" and reducing severity instead. Using "none" might impact diagnosability.)
* Default: auto

I am trying to find a way to control whether messages get distributed to another instance like the Monitoring Console, or whether they should only appear on the system where the issue happened. Is that possible? Where do I find those events if I select "log" as the parameter? Do they appear only in splunkd.log? Thanks
My company is using Splunk to store data for our apps, and we would like to use Tableau to build visualizations. I have installed the driver for Splunk, but I'm not clear on the required credentials, which are server, port, username, and password. I have access to our company's Splunk Enterprise, but it logs me in automatically once I connect to my company's VPN, which means I do not know my username or password. I also have difficulty finding the server and port. I tried "splunk.xxx(my company's name).com" as the server, 8089 as the port, and my corporate credentials as username and password; however, it didn't work. Can someone help me with this problem? Thanks a lot.
Hi, I need to display an overall status in a dashboard (Single Value) based on the results returned from my Splunk queries.

Example:
If all statuses are OK: Overall Status = OK
If one or more statuses are Failed and all others are OK (i.e. no job is Pending): Overall Status = Failure
If one or more statuses are Failed and one or more are Pending: Overall Status = Partial OK
If all are Pending: Overall Status = Pending

Job  Status
A    OK
B    OK
C    Failed
D    Pending

Any suggestions if the above is possible?
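One possible way to collapse this into a single value, sketched assuming the per-job results carry a field literally named Status with the values OK, Failed and Pending; the last branch is a guess for the mixed OK/Pending case the rules above do not spell out:

... base search returning one row per job ...
| stats count as total, count(eval(Status="OK")) as ok, count(eval(Status="Failed")) as failed, count(eval(Status="Pending")) as pending
| eval overall=case(ok=total, "OK", failed>0 AND pending>0, "Partial OK", failed>0, "Failure", pending=total, "Pending", true(), "Pending")
| table overall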
Hi all, as in the previous posts I and II, I'd like to anonymize names of cities while keeping the length of the string. The nature of the logs is quite complex. I'm sharing the part in question:

2022-03-31 15:23:11,210 INFO ...  - ...
... 381 lines omitted ...
F_AUSWEISENDE=12.02.2022
F_AUSWEISNUMMER=A2A2A2AAA
F_BEHOERDE=Berlin
F_BV_FREITEXTANTRAG=
---------------

What I'd like to get is:

2022-03-31 15:23:11,210 INFO ...  - ...
... 381 lines omitted ...
F_AUSWEISENDE=12.02.2022
F_AUSWEISNUMMER=A2A2A2AAA
F_BEHOERDE=XXXXXX
F_BV_FREITEXTANTRAG=
---------------

Sometimes, unfortunately, the names are more complex and include processing errors:

F_BEHOERDE=Stadt Rastatt B\xFCrgerb\xFCro

Then I'd like to get:

F_BEHOERDE=XXXXX XXXXXXX XXXXXXXXXXXXXXXX

I've managed to create a regex which anonymizes city names but doesn't keep their length. If the dynamic version is not possible, I will probably need to stick with this:

s/F_BEHOERDE=.*/F_BEHOERDE=XXXXX/g

I'll be grateful for any hints.
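In case it is useful, here is the props.conf form of the fixed-length fallback as a sketch (the stanza name is a placeholder); as far as I can tell, a single non-iterating SEDCMD substitution cannot replace character by character while preserving the original length, so this masks with a constant string:

# props.conf on the parsing tier; stanza name is hypothetical
[my_sourcetype]
SEDCMD-anon_behoerde = s/F_BEHOERDE=[^\r\n]*/F_BEHOERDE=XXXXX/g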
Hi, Is it possible to use Python (or other languages) to get logs that originated from specific hosts? For example, search for a list of hosts and return the logs that were ingested during a specific date range. Thanks !   
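Whichever language ends up driving it (the REST API or one of the SDKs can dispatch searches programmatically), the search itself could be a plain SPL sketch like the one below; the index, host names and date range are placeholders:

index=main host IN ("host01", "host02", "host03") earliest="03/01/2022:00:00:00" latest="04/01/2022:00:00:00"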
The Splunk API is trying to get data from a remote client API server, but it's showing an SSL untrusted error. We want Splunk to check and use the CA certificate copied onto the system and trust it. We need the system path where we can copy the CA certificate file so that Splunk will automatically use it whenever it calls that remote API server.
Background: In our company, Splunk is owned by DevOps. I don't have access to develop Splunk (like Splunk Dev); I can only use it and can't do or argue anything about the Splunk settings! Many commands like 'eventstats' cannot be run due to a space limit. For all that, we want to mine some useful data from the log files (we cannot get the log files directly; we can only get them through Splunk, by the way). We want to find potential bugs before customers encounter them.

Problems: I tried to get the raw log event files by running a command which is simple but returns all events; after it finished, I clicked the "download" button. But some files are too big to download (10 GB mostly)! So I want to find a way to run a Splunk spider program to get the raw events, but I know this area of Splunk poorly. Have you tried this, or can you think of another automated or semi-automated solution? Thanks!