All Topics

Hi! I've been struggling a lot with a pretty simple problem, but my Splunk rex skills are insufficient for the task. I want to match and list ANY value containing letters, digits, spaces and "/" characters between parentheses at the end of line/end of string. Examples:

bla bla bla (My Value0/0)
bla bla blb (My OtherValue0/1)
bla blb blc (My thirdValue0/0/0/0)

As you can see, the text BEFORE the ending value inside parentheses can be whatever. There can also be MULTIPLE similar parenthesized values along the string, but I ONLY want to match the one at end of line ($). The match must cover every letter, space, number or (typically) "/" character between the parentheses.

Using other regex dev tools I get a fairly decent result with a simple pattern like this:

\(.*\)$

\( matches the character "(" (char code 40 decimal, 28 hex, 50 octal) literally (case sensitive)
. matches any character (except line terminators)
* matches the previous token between zero and unlimited times, as many times as possible, giving back as needed (greedy)
\) matches the character ")" (char code 41 decimal, 29 hex, 51 octal) literally (case sensitive)
$ asserts position at the end of a line

I have also used variants of this, and they all end up working very well in regex testers and dev tools, and also on Linux (when pasting the entire table of messages into a file and applying them) - but not in Splunk. I believe there is a big coin drop coming somewhere along my Splunk learning path when everything will make sense to me; unfortunately I'm not there yet. Please help me out!
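One common reason a pattern like this misbehaves: `.*` is greedy, so `\(.*\)$` starts at the FIRST "(" in the event and swallows everything up to the final ")", including any earlier parenthesized values. Restricting the character class to exactly the characters allowed inside the parentheses avoids that. A sketch in Python, whose regex syntax is close to the PCRE that Splunk's rex uses (the capture name `endvalue` is just an illustration):

```python
import re

# Capture only letters, digits, underscores, spaces and "/" inside the
# final "(...)" anchored at end of line.
pattern = re.compile(r"\((?P<endvalue>[\w /]+)\)$")

line = "bla bla (skip this) bla blb (My OtherValue0/1)"
m = pattern.search(line)
print(m.group("endvalue"))  # My OtherValue0/1
```

In Splunk this would be roughly `| rex "\((?<endvalue>[\w /]+)\)$"` - worth verifying against real events, since multi-line events also change what `$` anchors to.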
Hello, We recently upgraded our controller to version 21.4.8-1411. After the upgrade, however, our SMS alerts are not working. According to the health rule's Evaluation Events > Actions Executed, it says "SMS Message Sent", but we're not getting any text alerts. Is this a known issue?
I have a somewhat unwieldy log file I'm trying to wrangle. Each log entry is contained between two lines, like so:

<TIMESTAMP> BEGIN LOG DECODE
log data
log data
log data
<TIMESTAMP> END LOG DECODE

What's the best way to grab everything in between and start to extract fields and such?
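On the ingestion side, multi-line events like this are usually handled in props.conf (for example, BREAK_ONLY_BEFORE matching the "BEGIN LOG DECODE" line) so each BEGIN...END block becomes one event. As a quick sanity check of the extraction logic outside Splunk, here is a sketch in Python (the sample text and timestamp format are made up):

```python
import re

# Hypothetical sample mimicking the layout described above.
log_text = (
    "2021-12-03 10:00:00 BEGIN LOG DECODE\n"
    "log data 1\n"
    "log data 2\n"
    "2021-12-03 10:00:01 END LOG DECODE\n"
    "2021-12-03 10:05:00 BEGIN LOG DECODE\n"
    "log data 3\n"
    "2021-12-03 10:05:02 END LOG DECODE\n"
)

# Lazily capture everything between a BEGIN line and the next
# timestamp-prefixed END line.
block_re = re.compile(
    r"BEGIN LOG DECODE\n(.*?)^\S+ \S+ END LOG DECODE",
    re.DOTALL | re.MULTILINE,
)
blocks = [b.strip() for b in block_re.findall(log_text)]
print(blocks)  # ['log data 1\nlog data 2', 'log data 3']
```

Once each block is one Splunk event, the inner lines can be pulled apart with rex or the field extractor.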
Hey all, I have 2 source types with the following properties:

source_1: id, value
source_2: name, description

So my events might look similar to:

source_1: id=abc-123, value="blah"
source_2: name=abc-123, description="some_description"

The values of source_1.id and source_2.name are equal. I'm trying to display the id/name, description and value in a table. I've come up with the following query to do so:

index=main sourcetype=source_2
| rename name AS id
| join id [search index=main sourcetype=source_1 id=*]
| table id, value, description

Is my query the best way to achieve this? Are there any alternatives?
Hi All, I had this error and it took a while to understand and fix it. Here is my environment:

Splunk 8.0.5
Splunk DB Connect 3.6.0
Java /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.275.b01-0.el6_10.x86_64/jre/bin/java
Red Hat Enterprise Linux Server release 6.10 (Santiago)
Target DB is PostgreSQL

We have several queries all running properly; just one was giving the error. The query is the following:

index=myindex sourcetype=mysourcetype etc…
| dbxoutput output=my_stanza

"my_stanza" refers to one present in db_outputs.conf. The error on the Splunk search head was:

rx.exceptions.OnErrorNotImplementedException
at rx.internal.util.InternalObservableUtils$ErrorNotImplementedAction.call(InternalObservableUtils.java:386)
at rx.internal.util.InternalObservableUtils$ErrorNotImplementedAction.call(InternalObservableUtils.java:383)
at rx.internal.util.ActionSubscriber.onError(ActionSubscriber.java:44)
at rx.observers.SafeSubscriber._onError(SafeSubscriber.java:153)
at rx.observers.SafeSubscriber.onError(SafeSubscriber.java:115)
at rx.exceptions.Exceptions.throwOrReport(Exceptions.java:212)
at rx.observers.SafeSubscriber.onNext(SafeSubscriber.java:139)
at rx.internal.operators.OperatorBufferWithSize$BufferExact.onCompleted(OperatorBufferWithSize.java:128)
at rx.internal.operators.OnSubscribeMap$MapSubscriber.onCompleted(OnSubscribeMap.java:97)
at rx.internal.operators.OperatorPublish$PublishSubscriber.checkTerminated(OperatorPublish.java:423)
at rx.internal.operators.OperatorPublish$PublishSubscriber.dispatch(OperatorPublish.java:505)
at rx.internal.operators.OperatorPublish$PublishSubscriber.onCompleted(OperatorPublish.java:305)
at rx.internal.operators.OnSubscribeFromIterable$IterableProducer.slowPath(OnSubscribeFromIterable.java:134)
at rx.internal.operators.OnSubscribeFromIterable$IterableProducer.request(OnSubscribeFromIterable.java:89)
at rx.Subscriber.setProducer(Subscriber.java:211)
at rx.internal.operators.OnSubscribeFromIterable.call(OnSubscribeFromIterable.java:63)
at rx.internal.operators.OnSubscribeFromIterable.call(OnSubscribeFromIterable.java:34)
at rx.Observable.unsafeSubscribe(Observable.java:10327)
at rx.internal.operators.OperatorPublish.connect(OperatorPublish.java:214)
at rx.observables.ConnectableObservable.connect(ConnectableObservable.java:52)
at com.splunk.dbx.command.DbxOutputCommand.process(DbxOutputCommand.java:161)
at com.splunk.search.command.StreamingCommand.process(StreamingCommand.java:58)
at com.splunk.search.command.ChunkedCommandDriver.execute(ChunkedCommandDriver.java:109)
at com.splunk.search.command.AbstractSearchCommand.run(AbstractSearchCommand.java:50)
at com.splunk.search.command.StreamingCommand.run(StreamingCommand.java:16)
at com.splunk.dbx.command.DbxOutputCommand.main(DbxOutputCommand.java:100)
Caused by: java.lang.NullPointerException
at java.math.BigDecimal.<init>(BigDecimal.java:809)
at com.splunk.dbx.service.output.OutputServiceImpl.setParameterAsObject(OutputServiceImpl.java:288)
at com.splunk.dbx.service.output.OutputServiceImpl.setParameter(OutputServiceImpl.java:270)
at com.splunk.dbx.service.output.OutputServiceImpl.processInsertion(OutputServiceImpl.java:216)
at com.splunk.dbx.service.output.OutputServiceImpl.output(OutputServiceImpl.java:76)
at rx.internal.util.ActionSubscriber.onNext(ActionSubscriber.java:39)
at rx.observers.SafeSubscriber.onNext(SafeSubscriber.java:134)
... 19 more

Looking at search.log from the job inspector:

12-03-2021 17:26:18.187 INFO DispatchExecutor - END OPEN: Processor=noop
12-03-2021 17:26:18.188 ERROR ChunkedExternProcessor - stderr: Exception in thread "main" java.lang.IllegalStateException: I/O operation on closed writer
12-03-2021 17:26:18.188 ERROR ChunkedExternProcessor - stderr: at com.splunk.search.command.AbstractWriteHandler.checkValidity(AbstractWriteHandler.java:100)
12-03-2021 17:26:18.188 ERROR ChunkedExternProcessor - stderr: at com.splunk.search.command.AbstractWriteHandler.flush(AbstractWriteHandler.java:228)
12-03-2021 17:26:18.188 ERROR ChunkedExternProcessor - stderr: at com.splunk.search.command.ChunkedWriteHandler.flush(ChunkedWriteHandler.java:69)
12-03-2021 17:26:18.188 ERROR ChunkedExternProcessor - stderr: at com.splunk.search.command.AbstractWriteHandler.close(AbstractWriteHandler.java:233)
12-03-2021 17:26:18.188 ERROR ChunkedExternProcessor - stderr: at com.splunk.search.command.ChunkedCommandDriver.execute(ChunkedCommandDriver.java:120)
12-03-2021 17:26:18.188 ERROR ChunkedExternProcessor - stderr: at com.splunk.search.command.AbstractSearchCommand.run(AbstractSearchCommand.java:50)
12-03-2021 17:26:18.188 ERROR ChunkedExternProcessor - stderr: at com.splunk.search.command.StreamingCommand.run(StreamingCommand.java:16)
12-03-2021 17:26:18.188 ERROR ChunkedExternProcessor - stderr: at com.splunk.dbx.command.DbxOutputCommand.main(DbxOutputCommand.java:100)

I solved it in this way (adding fillnull):

index=myindex sourcetype=mysourcetype etc…
| fillnull value=0.00 mbytes_in
| fillnull value=0.00 mbytes_out
| dbxoutput output=my_stanza

There were 2 records in the extraction having the "mbytes_in" and "mbytes_out" fields without any value. I am sure that before upgrading to Splunk DB Connect 3.6.0 it was working properly.

The target DB is PostgreSQL and the table is defined as below; as you can see, "mbytes_in" and "mbytes_out" can accept NULL values (and I can see several records in the PostgreSQL DB, populated in the past, with "mbytes_in" and "mbytes_out" having NULL values). Here is the table definition in PostgreSQL:

CREATE TABLE myschema.mytable (
  field01 integer NOT NULL,
  field02 character varying(6) NOT NULL,
  field03 character varying(6),
  field04 character varying(15) NOT NULL,
  field05 timestamp(6) with time zone NOT NULL,
  mbytes_in numeric(12, 2),
  mbytes_out numeric(12, 2),
  field06 character varying(15) NOT NULL,
  field07 character varying(50),
  field08 character varying(50),
  field09 character varying(50) NOT NULL,
  field10 character varying(255) NOT NULL,
  field11 character varying(15) NOT NULL,
  field12 character varying(255),
  field13 date NOT NULL,
  field14 character varying(255) NOT NULL,
  CONSTRAINT my_pkey PRIMARY KEY (field01)
)
WITH (OIDS = FALSE)
TABLESPACE mytablespace;

ALTER TABLE myschema.mytable OWNER to myuser;
GRANT ALL ON TABLE myschema.mytable TO myuser;

The log error that pointed me to a solution was the following:

at com.splunk.dbx.command.DbxOutputCommand.main(DbxOutputCommand.java:100)
Caused by: java.lang.NullPointerException
at java.math.BigDecimal.<init>(BigDecimal.java:809)

By the way, no valuable logs were present in the Splunk _internal index; usually, when some SPL query fails to insert into our PostgreSQL DB, I find valuable information there like SQL codes and SQL errors. This time it was not present. I hope this post will help someone having the same issue.

Best Regards,
Edoardo
I've configured via the app instructions and pushed the files I want to be tracked. Yay. The app install went well also. The issue I'm having is that the push from Splunk to the repository is failing with these messages:

EXITCODE: 0
repo_size=181149490
COMMAND: git push
OUTPUT: fatal: could not read Username for 'https://github.ibm.com': No such device or address
EXITCODE: 128
An exception of type Exception occurred. Arguments: ('Error occured - is authentication to remote site correct? and network path available?',) runtime=0.23 status=1

Is there additional configuration I need to do?
I'm new to Splunk - how can I import syslog from my local computer into Splunk?

- When I search, it says it can be done via a universal forwarder, but I want to collect my syslog logs on localhost.
- I opened UDP port 514 and created my settings in Splunk, but nothing shows up in search.
Hi, We have 1000 EC2 instances - how can we install forwarders on all instances in one go? If we use a script, from where do we need to push the forwarder config to all 1000 instances?
Hi, With a HEC token we see loss in logs. 1. Is there a way to get the logs that were lost? 2. How will we know that there is log loss?
What is thruput in limits.conf of the universal forwarder? What does it do? What is its location? Are "throughput" and "thruput" the same or different?
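"thruput" is simply Splunk's spelling of throughput in its configuration files - the two words mean the same thing. The [thruput] stanza in limits.conf caps how many kilobytes per second the forwarder will read and send; universal forwarders ship with a deliberately low default so they don't flood the network. A sketch of where and how it is typically overridden (the value 512 is just an example - tune it to your environment):

```
# $SPLUNK_HOME/etc/system/local/limits.conf  (or in an app's local/ directory)
[thruput]
# Maximum KB/s this instance will process; 0 means unlimited
maxKBps = 512
```

A restart of the forwarder is needed for the change to take effect.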
Dear all, despite my best efforts, I was not able to find satisfactory information, so I would like to ask if anyone here can help me with this. We have the UF running in a docker container in a k8s environment. For getting data in, we are using batch/monitor on files stored on a persistent volume claim. Consider the following scenario: the container the UF is running in gets restarted while the UF is processing a file. After booting back up, the UF re-processes the entire file, leading to duplicates on the indexer. Is this something we need to consider, for example by checking that the UF is currently not processing anything before restarting? Or will the UF take care of all of this for us?
I'm a bit lost. Every piece of info that I find on the web (as well as materials from Splunk's own trainings) says that the UF does only very limited input preparation (line breaking, metadata adjustment, character encoding) but no real parsing work. Thus I'm confused to find in its logs, for example:

12-03-2021 15:50:44.906 +0100 WARN DateParserVerbose - Failed to parse timestamp in first MAX_TIMESTAMP_LOOKAHEAD (128) characters of event. Defaulting to timestamp of previous event (Fri Dec 3 15:50:36 2021). Context:[...]

That would mean that some timestamp parsing does take place. But do I still need to put my timestamp extraction config on the HF? (I use UF -> HF -> idx.) How does this relate to the settings for breaking on timestamp? If I want to break on timestamp (which should happen on the UF, right?), do I need to provide the timestamp format on both the UF (for breaking) and the HF (for parsing)?
Hello, I would like to ask if it is possible to pass a time restriction to the subsearch of a join? Unfortunately I did not find anything fitting in the forum. In my specific case I would like to enrich the results of search1 with the last event of search2 in which the ID is equal and the timestamp of search2 is not more than 5 minutes before the timestamp of search1.

index="summary_index" search_name="search1" ...
| fields _time ID ...
| join type=left left=L right=R usetime=true earlier=true where L.ID=R.ID
    [search index="summary_index" search_name="search2" | fields ...]

Does someone have an idea? Thanks in advance!
Hello everyone, Here's the situation:

indexer1 (also has the deployment server role)
indexer2
forwarder1

I distributed a new outputs.conf via the deployment server with:

[tcpout]
defaultGroup = indexer1,indexer2

[tcpout:indexer1]
server = xx.xx.xx.xx:9997

[tcpout:indexer2]
server = indexer2.com:9997

There is a VS between forwarder1 and indexer2. I activated DEBUG for TcpOutputProc in log.cfg. The log on forwarder1 tells me only:

12-03-2021 15:08:15.743 +0100 DEBUG TcpOutputProc - channel not registered yet
12-03-2021 15:08:15.743 +0100 DEBUG TcpOutputProc - Connection not available. Waiting for connection ...

and

12-03-2021 15:28:27.862 +0100 WARN TcpOutputProc - Cooked connection to ip=ip_vs_indexer2:9997 timed out

A tcptraceroute shows [open] between the forwarder and the VS, but doesn't tell me any more than that. Does this mean I have a network issue? Do you have any suggestions?

Thanks
Ema
Example: MyNameisKumar. I want name=kumar from this ingested data. Please help me with the solution.
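Only one sample value is given above, so any pattern is a guess. Assuming the values always contain the literal text "MyNameis" followed by the name, a named capture plus lowercasing would do it - sketched here in Python, whose regex syntax is close to the PCRE that Splunk's rex uses:

```python
import re

# Assumption: the field always looks like "MyNameis<Name>".
raw = "MyNameisKumar"
m = re.search(r"MyNameis(?P<name>\w+)", raw)
name = m.group("name").lower()
print(f"name={name}")  # name=kumar
```

The Splunk equivalent would be along the lines of `| rex "MyNameis(?<name>\w+)" | eval name=lower(name)` - but verify the assumption about the prefix against real events first.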
Hi, I have a very specific problem. I have a field with the following values at different timestamps. Example:

1,3,20
0
2,3,43,9,12
3,3,40,8,20,9,80
2,3,20,9,30
6,2,0,3,30,4,42,5,29,6,80,9,92

This field actually represents very specific information, which I need to extract to feed my calculation. The first number says how many fields there are to be extracted. The second number (and every subsequent number in an even position) is the name of a field to be extracted. The third number (and every subsequent number in an odd position) is the value of the field whose name is stated just before it. That means the last example above decodes as: there are six (6) fields to be extracted, and the key:value pairs are 2:0, 3:30, 4:42, 5:29, 6:80, 9:92.

I want to be able to extract these fields, assigning them the appropriate names. Is there a command / function that handles this well? Thanks in advance!
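The decoding rule described above (a count, then alternating key/value numbers) is straightforward to express procedurally. In SPL it can be approximated with split() and mvindex(), but as a sketch of the logic itself, here it is in Python (the function name parse_pairs is just an illustration):

```python
def parse_pairs(raw: str) -> dict[int, int]:
    """Parse '<count>,k1,v1,k2,v2,...' into a {key: value} dict."""
    nums = [int(x) for x in raw.split(",")]
    count, rest = nums[0], nums[1:]
    # Even positions in the remainder are keys, odd positions are values.
    pairs = dict(zip(rest[0::2], rest[1::2]))
    assert len(pairs) == count, "count prefix does not match pairs found"
    return pairs

print(parse_pairs("6,2,0,3,30,4,42,5,29,6,80,9,92"))
# {2: 0, 3: 30, 4: 42, 5: 29, 6: 80, 9: 92}
```

Note the "0" case from the examples yields an empty dict, since the count prefix says there is nothing to extract.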
Hello, I'm trying to ingest events from this Microsoft event viewer channel:

[WinEventLog://Microsoft-Windows-TerminalServices-ClientActiveXCore/Microsoft-Windows-TerminalServices-RDPClient/Operational]
disabled = 0
renderXml = 1
sourcetype = XmlWinEventLog
index = ad

My issue is that the name of the event log is the whole path, not just "Operational" like the others. Because of that I get an error in Splunk:

ERROR ExecProcessor [5076 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-winevtlog.exe"" splunk-winevtlog - WinEventMon::configure: Failed to find Event Log with channel name='Microsoft-Windows-TerminalServices-ClientActiveXCore/Microsoft-Windows-TerminalServices-RDPClient/Operational'

Is there a way to escape the "/" before Operational? Thank you very much in advance.
We are in the process of setting up comprehensive VPN dashboards. We would like to enable alerting on these dashboards based on machine learning and standard deviation. Can someone help me achieve this?
My database collector is set to use a custom JDBC string:

jdbc:oracle:thin:@(DESCRIPTION =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 10.64.129.132)(PORT = 5350))
    (ADDRESS = (PROTOCOL = TCP)(HOST = 10.64.129.133)(PORT = 5350))
  )
  (CONNECT_DATA =
    (SERVICE_NAME = SERVICE_NAME.XYZ.COM)
  )
)

So I would expect it to try to reach the database on the above IP addresses (the DB listeners). But instead it tries to connect to the Oracle server VIPs where the databases are installed, ignoring the listener IPs:

Caused by: java.io.IOException: Connection timed out, socket connect lapse 127231 ms. server.xyz.com/10.64.50.184 5152 0 1 true
  at oracle.net.nt.TcpNTAdapter.connect(TcpNTAdapter.java:209)
  at oracle.net.nt.ConnOption.connect(ConnOption.java:161)
  at oracle.net.nt.ConnStrategy.execute(ConnStrategy.java:470)
  ... 29 more
Caused by: java.io.IOException: Connection timed out, socket connect lapse 127231 ms. server.xyz.com/10.64.50.174 5152 0 1 true
  at oracle.net.nt.TcpNTAdapter.connect(TcpNTAdapter.java:209)
  at oracle.net.nt.ConnOption.connect(ConnOption.java:161)
  at oracle.net.nt.ConnStrategy.execute(ConnStrategy.java:470)

Of course, the firewall allows communication only towards the listener IPs and not to the servers' original IPs. What should I do? Thanks.
Basically the chart shows blue and green lines, but the user needs more distinguishable colors, like red and blue.