Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

Hi All, I am facing an issue with time zone interpretation. One server is configured with CET and sends its logs to Splunk Cloud (to the best of my knowledge, the indexers are in the GMT time zone). This server sends syslog to SC4S servers that are also configured with GMT. The event time in Splunk is taken from the raw event, so with the indexers and SC4S both in GMT I end up with a large difference between the event time (server time, CET) and the index time (GMT). Please help: how can I resolve this large gap between event time and index time?

Thanks, Bhaskar
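One common fix is to tell Splunk the time zone of that source explicitly, so timestamps without a zone are interpreted as CET rather than GMT. A minimal props.conf sketch, assuming a hypothetical sourcetype name for the CET server's events and that the stanza is deployed wherever the events are parsed (SC4S/HEC layer or the indexers):

# props.conf - "my_cet_syslog" is a hypothetical sourcetype name; substitute the real one
[my_cet_syslog]
TZ = Europe/Berlin

A [host::<hostname>] stanza with the same TZ setting works the same way if only this one host is affected.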
I am investigating how to set up a continuous build process for our Splunk add-on and I saw that there are three options:

- slim - a CLI tool that is part of "Package apps with the Packaging Toolkit" (Splunk Developer Program documentation)
- AppInspect - part of "Validate quality of Splunk apps" (Splunk Developer Program documentation)
- Add-on Builder - see "Install the Add-on Builder" in the Splunk documentation

slim gave me very little output and it wasn't clear what sort of validations it was running. AppInspect is very configurable and provides rich output, so I'm pretty happy with it. However, I was advised to trust the Add-on Builder's validations; unfortunately, apart from using some Selenium manipulation to drive the Splunk UI, I wasn't able to find a way to call its validation logic automatically from an HTTP API or a CLI. Finally, the output of AppInspect and the Add-on Builder differs - I'm currently checking why. Perhaps the validations are completely different.

So my questions to the community are:
1. What is the best approach to validate a Splunk add-on?
2. How would you recommend automating at least the validation part of the process?

Thank you so much in advance!
Hi Community, we have encountered a strange case with the curl command. One of our users was running a curl command to fetch a response from a server and then run an SPL search on it. Since no time limit was given to the curl command, the request went all the way back in time and tried to retrieve all the data the URL had ever produced. Retrieving that much data took a long time and the search would die on the server. We found this while investigating a slowness issue on the server, and adding time parameters fixed it. Now that the slowness is resolved, I would like to know whether it is possible to get a list of the curl commands executed in Splunk, or whether there is some other way to see which curl commands are being run. Thanks in advance.

Regards, Pravin
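Assuming the curl calls are made from inside SPL searches (e.g. via a custom curl search command), the _audit index records every search string that was run, so you can list the ones containing curl. A minimal sketch:

index=_audit action=search info=granted search=*curl*
| table _time user search
| sort - _time

The search field in the audittrail events holds the full SPL string, so this surfaces who ran which curl invocation and when.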
Hello, I use a very basic search over a short period, like the one below, but I am a little surprised by the disk quota used by this search (350 MB for 148,000 events between 07:00 and 13:00):

index=tutu sourcetype="toto" type=x earliest=@d+7h latest=@d+19h
| fields sam
| eval sam=lower(s)
| stats dc(s)

So I am trying to find ways to reduce the quota used by this search. Does anybody have an idea, please?
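A small observation: the search keeps only sam but then references s, so the eval and stats operate on a field that was just discarded. A tightened sketch that keeps the field names consistent, which also keeps the intermediate results as small as possible:

index=tutu sourcetype="toto" type=x earliest=@d+7h latest=@d+19h
| fields sam
| eval sam=lower(sam)
| stats dc(sam)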
I have a long event from which I tried to extract fields using Splunk's "extract additional fields" feature. I chose comma-delimited extraction and named the fields appropriately. I have 117 fields altogether, and when I display the fields with the table command I notice a couple of data-to-field mismatches: field3's value is replicated into field5, and field4's value is replicated into field8. Please refer to the screenshot for a better understanding (screenshot: regex error). I have checked transforms.conf and it looks fine. I'm not sure how to get past this issue; any help pointing me towards the right solution will be highly appreciated.
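When a delimited extraction shifts values like this, a frequent cause is that some events contain an extra comma (or an embedded, quoted comma), so the delimiter count no longer matches the 117 field names. A small diagnostic sketch, assuming the raw event is a single comma-delimited line; your_index and your_sourcetype are placeholders:

index=your_index sourcetype=your_sourcetype
| eval field_count=mvcount(split(_raw, ","))
| stats count by field_count

If more than one field_count value shows up, the offending events can be inspected and the delimiting rules adjusted accordingly.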
I have messages like the ones below in my logs and I want to extract the ErrorCode from them. Here the ErrorCode is CIS-46031; however, there can be a space right before or right after the colon that follows "ErrorCode":

msg: ErrorCode:CIS-46031,ErrorMessage:Some unknown error occurred in outage daemon request. Please check.,Error occurred in CIS domain events outage processing.
msg: ErrorCode : CIS-46032,ErrorMessage:Some unknown error occurred in outage daemon request.
msg: ErrorCode :CIS-46033, ErrorMessage:Some unknown error occurred in outage daemon request.

How can I do this in Splunk?
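A minimal rex sketch, assuming the code always has the form LETTERS-digits and that optional spaces may surround the colon:

... your base search ...
| rex field=_raw "ErrorCode\s*:\s*(?<ErrorCode>[A-Z]+-\d+)"
| stats count by ErrorCode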
Hi, is there a way to connect Splunk Connect for Kubernetes to HEC on a Splunk Cloud instance through an HTTP(S) proxy? Is it possible to use `environmentVar:` in the values.yml file? If yes, what are the variable names and the format to use?

Regards, Nicolas.
Hello, does anybody know how to set a shared axis range for several metrics on a dual-axis chart? At the moment, when I add more than one metric to the right (or left) axis, the widget creates a separate range for every metric, which makes it unusable. Use case: I want to use the LEFT axis for my Business Transaction's Response Time (line) and the RIGHT axis for its Calls per Minute (column) and Errors per Minute (column). Right now it looks as if my BT has a 100% fail rate, even though there were only 2 errors. Thanks for any advice. Tomas B.
How can I find the time of the last event from each host in the system? The output could be in the following format:

host1|datetime
host2|datetime

Thank you.
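A minimal sketch using tstats, which only reads index-time metadata and is therefore fast; the index=* filter is an assumption to narrow down as needed:

| tstats latest(_time) as last_event_time where index=* by host
| eval datetime=strftime(last_event_time, "%Y-%m-%d %H:%M:%S")
| eval output=host."|".datetime
| table output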
Dears, I need advice from experts with hands-on Splunk experience (please do not point me to Splunk Professional Services or a partner). How can I estimate, at least approximately, how much data a source device will generate that I need to ingest into Splunk? There must be some way to make a reasonable assumption; for example, a firewall will generate more logs than a Windows server. Let's assume I am ingesting 300 GB/day into Splunk and I have 5 administrative users on the search head: is the sizing highlighted below good to follow? And if I add the Enterprise Security module, does the sizing change? How much additional ingestion needs to be planned for, and what is the math behind it? Thanks.
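If the devices are already sending data to an existing Splunk instance, you can measure rather than estimate. A sketch that sums actual daily ingest per sending host from the license manager's usage log (run somewhere that can see the license manager's _internal data):

index=_internal source=*license_usage.log* type=Usage
| eval GB=b/1024/1024/1024
| timechart span=1d sum(GB) by h

b (bytes) and h (host) are the standard field names in license_usage.log; grouping by st instead of h breaks the same numbers down by sourcetype.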
Hello, let's say I have the following two data sets:

index=events: _time, event_id, ip
index=connections: _time, ip_address, user

When a user connects to the system, a record is written to connections with the time of the connection, the user, and the ip_address assigned to that user. The user keeps that IP until they disconnect; once they disconnect and a new user connects, the same ip_address can be assigned to the new user. Events are recorded in events, associated with an ip, together with the _time of the event and a unique event_id. ip and ip_address are the same address, just named differently in each data set.

If I want to find the user behind the ip that caused a single event, identified by event_id (3000 in this example) - that is, the last user who connected to that ip before the event happened - I would do:

index=connections
    [ search index=events event_id="3000"
      | head 1
      | eval latest=_time
      | eval earliest=relative_time(latest, "-24h")
      | eval ip_address=ip
      | return earliest latest ip_address event_id ]
| sort by -_time
| head 1
| table _time event_id ip_address user

This gives a table with a single row. For example, with these data sets:

connections:
22:00 10.0.0.5 margarita
19:00 10.0.0.17 charles
11:00 10.0.0.5 alice

events:
23:00 3002 10.0.0.5
20:00 3001 10.0.0.17
18:00 3000 10.0.0.5

I would get:

18:00 3000 10.0.0.5 alice

i.e. the last user (alice) who was connected to the ip (10.0.0.5) before the event _time (18:00), even though there is a later connection for that ip.

Now I would like to get this result for every row of the events data, in the format:

_time event_id ip user

For the example data above that would be:

23:00 3002 10.0.0.5 margarita
20:00 3001 10.0.0.17 charles
18:00 3000 10.0.0.5 alice

It should work like a join on ip = ip_address, but taking into account that the event _time determines which connections row to use, since there can be more than one row with the same ip_address. I have thought about and tried different approaches - multiple subsearches, join plus a subsearch, the foreach command - but I always hit the problem that I can't return more than one "latest". I feel there should be an easy way to achieve this, but I am not very experienced with Splunk. Any ideas or hints on how I could achieve this? Thank you very much!
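One pattern that avoids running a subsearch per event is to merge both data sets, sort them by time, and let streamstats carry the most recent user forward for each IP. A sketch, assuming both indexes are searched over the same time range:

(index=connections) OR (index=events)
| eval ip=coalesce(ip, ip_address)
| sort 0 _time
| streamstats current=t last(user) as user by ip
| where isnotnull(event_id)
| table _time event_id ip user

Because the rows are in ascending time order within each ip, last(user) on an events row is the user from the most recent connections row seen so far for that ip; events rows have no user of their own, and aggregation functions ignore null values, so they do not disturb the carried value.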
I have a time-series data source where an alert writes an event indicating that the number of systems an account is logging into is increasing over a set window of time. Each event lists, in a multivalue field, all the machines including the new one the account added. Broken out by day, I'm trying to display only the unique machines, e.g.:

| stats values(IPS) as ips values(computer) as dvcs by user _time

I tried to accomplish this using mvdedup, but that only dedupes the multivalue field within a single event, not across a full time-series search result. I would love any advice you may have on how to accomplish this.
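One way to dedupe across events rather than within one is to expand the multivalue field first and then let stats values() (which is inherently distinct) do the work per user and day. A sketch, assuming computer is the multivalue field of machine names:

... base search ...
| mvexpand computer
| bin _time span=1d
| stats values(computer) as unique_dvcs dc(computer) as dvc_count by user _time

The same pattern applied to IPS gives the distinct IP list per user per day.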
Sample text from a log that I'm searching:

"store license for Store 123456 2022-03-27 02:01:59,649 [XNIO-2 task-3] ERROR"

I'm trying to search for, and return, a store number that is associated with a particular error. The following search successfully returns the store number (and count):

index=* host="log*" "store license for"
| rex field=_raw "Store\s(?P<storenumber>.*)"
| stats count by storenumber

But when I try to match the store number together with the error string that follows it, I get "No results found". Here is the search I'm trying:

index=* host="log*" "store license for"
| rex field=_raw "Store\s(?P<storenumber>.*)[\r\n]+2022\-03\-27\s02:01:59,649\s\[XNIO-2\stask-3\]\sERROR"
| stats count by storenumber

Splunk doesn't seem to like the newline character. I've tried \n and [\r\n]+ and others, but always with the same incorrect results.
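A likely cause is the greedy `.*` capture: by default `.` does not match newlines, so it consumes the rest of the current line and leaves nothing for the `[\r\n]+` part to match. A sketch that restricts the capture to digits and bridges the line break with `\s+` (which does match newlines), matching the general timestamp/ERROR pattern rather than one literal timestamp:

index=* host="log*" "store license for"
| rex field=_raw "Store\s+(?P<storenumber>\d+)\s+\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2},\d{3}\s\[[^\]]+\]\sERROR"
| stats count by storenumber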
Hi, I have a lot of data that I want to see on one screen, so I need to use transpose. When I do this I can't add colours easily. As you can see below, this is what I want: when the % is over 100 the cell goes red, and below 100 it goes green, but only for the columns with % in them. Any help would be great, thanks.
Hello, I have a search that prints out a list of numbers in this format:

[144 ==> 143] [145 ==> 144] [144 ==> 145] [145 ==> 144] [144 ==> 145] [143 ==> 144] [144 ==> 143] [143 ==> 144] [144 ==> 143] [143 ==> 144] [142 ==> 143] [143 ==> 142] [144 ==> 143]

I want to extract the last three digits after the "==>" sign; for example, [144 ==> 143] becomes 143. Then I want a summation of those values, so I guess I need to convert them to integers. Here is what I have so far:

| rex "==>(?<regexusers>.*)"

where regexusers is the captured value. Any help will be greatly appreciated!
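A minimal sketch, assuming an event can contain several [x ==> y] pairs (max_match=0 captures them all) and that the goal is one grand total across all events:

... base search ...
| rex max_match=0 "==>\s*(?<regexusers>\d+)\]"
| mvexpand regexusers
| eval regexusers=tonumber(regexusers)
| stats sum(regexusers) as total

Restricting the capture to \d+ also keeps the greedy .* from swallowing the closing bracket and everything after it.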
I am attempting to get Splunk to recognize a specific column in a CSV (Current_time) as the _time column upon ingestion.

***Sample Data***
Some sample lines from the CSV in question:

name,dpname,last_login,create_date,modify_date,Current_time
CAT-user,testing,,2021-01-29 09:47:42.340000000,2021-01-29 09:47:42.340000000,2022-03-24 13:18:36.390000000
master,test,,2021-09-16 11:09:21.597000000,2021-09-16 11:09:21.597000000,2022-03-24 13:18:36.390000000
model,database,,2003-04-08 09:10:42.287000000,2003-04-08 09:10:42.287000000,2022-03-24 13:18:36.390000000

Note that multiple columns contain timestamps. I want Splunk to ingest them all, but only Current_time should be used for _time.

***Error***
With the config below I am getting DateParserVerbose warnings in splunkd.log for the host the universal forwarder is sending the CSV from (my-host):

0400 WARN DateParserVerbose - Failed to parse timestamp in first MAX_TIMESTAMP_LOOKAHEAD (128) characters of event. Defaulting to timestamp of previous event (Fri Jan 29 15:35:24 2021). Context: source=C:\temp\acct_0001.csv|host=my-host|transforms_sql_report_acct|13995

***Config***
We have a clustered environment with one search head, one deployment server, and three indexers.

On the host with the universal forwarder installed (deployed from the deployment server's /opt/splunk/etc/deployment-apps/):

inputs.conf
[monitor://C:\temp\acct_*.csv]
index=department_index
sourcetype=sql_report_acct
crcSalt = <SOURCE>
disabled = 0

props.conf and transforms.conf (located on the deployment server in /opt/splunk/etc/master-apps/_cluster/local; confirmed deployed to all three indexers under /opt/splunk/etc/slave-apps/_cluster/local):

props.conf
[sql_report_acct]
BREAK_ONLY_BEFORE_DATE=null
CHARSET=UTF-8
INDEXED_EXTRACTIONS=csv
KV_MODE=none
LINE_BREAKER=([\r\n]+)
NO_BINARY_CHECK=true
SHOULD_LINEMERGE=false
TIME_FORMAT=%Y-%m-%d %H:%M:%S.%9N
TIME_PREFIX=^([^,]*,){5}
category=Structured
description=sourcetype for my input
disabled=false
pulldown_type=true
REPORT-transforms_sql_report_acct = transforms_sql_report_acct

transforms.conf
[transforms_sql_report_acct]
DELIMS = ","
FIELDS = name, dpname, last_login, create_date, modify_date, Current_time

By my understanding, timestamp recognition happens during the parsing phase of the data pipeline on the indexers, and slave-apps/_cluster/local should have the highest config precedence, so I am not sure what I am doing wrong here. How do I get the Current_time field recognized as _time? Thanks for your input!
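One thing worth checking: with INDEXED_EXTRACTIONS, structured-data parsing (including timestamp selection) is performed on the universal forwarder itself rather than on the indexers, so the props stanza needs to reach the UF; and for structured sourcetypes the column to use for _time is normally named with TIMESTAMP_FIELDS instead of TIME_PREFIX. A minimal props.conf sketch under those assumptions:

[sql_report_acct]
INDEXED_EXTRACTIONS = csv
TIMESTAMP_FIELDS = Current_time
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%9N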
Hi! I installed DB Connect for the first time today and can successfully get data from Oracle. While testing, I created a couple of DB Connect inputs with new sourcetypes (a name is required during creation) that I no longer need and that seem to be interfering with data processing - for example, new columns that I add in SQL are not visible in new events after a data fetch. I would like to delete those sourcetypes, but:

* they do not show up in Settings > Source Types
* they do show up in `| metadata type=sourcetypes index=myidx`, but I don't know how to delete them using SPL

How can I delete these leftover "zombie" sourcetypes and basically start over? Is deleting the whole index the only way?
Hello, I am trying to develop a Splunk query, but it should only run when another Splunk query returns an empty result. What command can I use? Thank you.
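One pattern for this (a sketch with hypothetical index, sourcetype, and sentinel-term names) is to let a subsearch emit either a match-everything filter or a filter that can never match, depending on whether the other query returned anything. A subsearch that returns a field literally named search has its value inserted as-is into the outer query:

index=main sourcetype=primary_data
    [ search index=main sourcetype=control_data
      | stats count
      | eval search=if(count==0, "*", "term_that_never_matches_xyzzy")
      | fields search ]
| ...

When the control query finds nothing, the outer query runs normally; when it finds something, the impossible term suppresses all results.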
Hello, I have logs with multiple values for two fields. For each event the data looks like this example:

dest: computer1
user: user1, user2
builtinadmin: true, false

It comes from this raw data:

<computer N="computer1" D="corp" OS="Windows" DC="false">
  <users>
    <user N="user1" builtinadmin="false" />
    <user N="user2" builtinadmin="true" />
  </users>
</computer>

Is there a way to show the data like this instead, where each user is correctly correlated with their builtinadmin value?

dest user builtinadmin
computer1 user1 true
computer1 user2 false
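A common way to keep paired multivalue fields aligned is mvzip followed by mvexpand. A sketch, assuming user and builtinadmin always have the same number of values per event and appear in matching order:

... base search ...
| eval pair=mvzip(user, builtinadmin, "|")
| mvexpand pair
| eval user=mvindex(split(pair, "|"), 0), builtinadmin=mvindex(split(pair, "|"), 1)
| table dest user builtinadmin

Each expanded row then carries exactly one user with the builtinadmin value it was zipped with.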
I have this search, where splunk_check_hostnames.csv is a single column of hostnames with hostname as the header:

index=_internal sourcetype=splunkd earliest=-24h [| inputlookup splunk_check_hostnames.csv]
| stats count by hostname, version, os

It works nicely. I'm now trying to figure out how to run the search below without having to maintain a second lookup whose header is host:

index=os_* sourcetype=linux_secure OR source=WinEventLog:Security earliest=-24h [| inputlookup splunk_check_hostnames.csv]
| stats count by host, index, sourcetype

Any thoughts? TIA, Joe
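A minimal sketch: rename the lookup field inside the subsearch so the filter it generates uses host rather than hostname. The subsearch result is converted into field=value conditions, so the field name it emits is what the outer search matches on:

index=os_* sourcetype=linux_secure OR source=WinEventLog:Security earliest=-24h
    [| inputlookup splunk_check_hostnames.csv
     | rename hostname as host
     | fields host ]
| stats count by host, index, sourcetype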