All Posts

Hello. We are trying to change the blacklists below:

blacklist3 = EventCode="4690"
blacklist4 = EventCode="5145"
blacklist5 = EventCode="5156"
blacklist6 = EventCode="4658"
blacklist7 = EventCode="5158"

to a single blacklist with multiple event codes. We have tried:

blacklist3 = EventCode=5145,5156,4658,4690,5158

and

blacklist3 = EventCode="5145" OR "5156" OR "4658" OR "4690" OR "5158"

Neither of these is applying and filtering out the event codes. Any recommendations on how to get this to work?
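For what it's worth, in the advanced Windows event log filter format the quoted value after EventCode= is treated as a regular expression, so a single blacklist along these lines may work (the stanza name and entry number here are assumptions, not taken from your config):

```ini
# inputs.conf - hypothetical stanza; adjust to your actual input
[WinEventLog://Security]
# The quoted value is a regex, so use alternation (|) rather than
# commas or OR to match several event codes in one blacklist entry
blacklist3 = EventCode="4690|5145|5156|4658|5158"
```

The comma-separated form is only valid in the simple blacklist format (a bare list of event codes with no key="regex" pairs), which may be why the earlier attempts did not apply.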
Hi @law175, what's the result of running

index=your_index host=192.168.79.1

Do you have one or two sources? Ciao. Giuseppe
Please share your dashboard code.
Yes. What I do is Data Inputs > TCP > New Local TCP > Port 9008 TCP / Source type = Syslog / Method = IP / App Context = Search and Reporting / Index = default. If I switch, I delete the TCP input and add a new UDP input with the same settings. Right now I have TCP 9008 and UDP 9004 running, with the same appliance forwarding logs via syslog to both ports. TCP has been working fine for 4 days now (though with lots of dropped events). The appliance sends logs fine via UDP, but Splunk stopped returning search results within an hour. The same log always appears as the last shown event: a memory-usage dump that spans around 50 lines (I assume it is too big). Could a syslog message that is too big be causing an error that breaks searching? The search I am using is source="udp:9004"
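As a side note, the UI steps described above end up as stanzas roughly like the following in inputs.conf (a sketch; the sourcetype/index values are copied from the description, not verified against the actual system):

```ini
# inputs.conf equivalents of the UI-created inputs (illustrative)
[tcp://9008]
sourcetype = syslog
index = default
connection_host = ip

[udp://9004]
sourcetype = syslog
index = default
connection_host = ip
```

Comparing the effective configuration (e.g. with `splunk btool inputs list --debug`) against what the UI shows can help confirm the UDP input is really set up the way you expect.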
Hi @law175, a very stupid question: when you changed protocol from UDP to TCP, did you change the input stanza? Which search are you using? Ciao. Giuseppe
It seems that this picks up ALL web access, so the exclude list needs to be a lot more complicated: every accessed URI ends up here (".../dashboards", ".../report", ".../reports", etc.). So this would work, but the search will need to be much more complicated than before. Any suggestions on how to accomplish a search for dashboards visited are much appreciated.
I had this search set up to be able to audit dashboard usage:

index=_internal source=*splunkd_ui_access.log /app NOT(user="-" OR uri_path="*/app/*/search")

After updating to 9.1.1 there were very few events matching this search. After a bit of digging, it seems that what used to be

"GET /en_US/app/<appname>/<dashboard> HTTP/1.1"

is no longer there, and the '/app' URI part no longer points to dashboards. Instead, I can find the dashboards accessed as

"GET /en-US/splunkd/__raw/servicesNS/<user>/<dashboard>/data/ui/<lots>/<more>

As best as I can see, the information I am interested in now seems to reside in "web_access.log" instead, which previously contained a lot more information (like the __raw log does now). The events in this log file look like this:

"GET /en-GB/app/<app>/<dashboard> HTTP/1.1"

So I need to modify the original search to exclude launcher and use a different pattern for search, etc. My question is whether working with "web_access.log", instead of the now seemingly harder-to-work-with "splunkd_ui_access.log", is the correct and optimal approach. Or should I be looking at some other source or some other way?
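For illustration, a search against the new location might look something like this (the sourcetype name, exclusions, and rex pattern are assumptions to be adapted, not a verified recipe):

```
index=_internal sourcetype=splunk_web_access uri_path="*/app/*"
    NOT (user="-" OR uri_path="*/app/launcher/*" OR uri_path="*/app/*/search*")
| rex field=uri_path "/app/(?<app>[^/]+)/(?<dashboard>[^/?]+)"
| stats count BY user app dashboard
```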
I'm not aware of any Splunk documentation on the matter.
Hello, we had this error on an output query set up in Splunk DB Connect. Basically, the Splunk query inserts data into an external database.

2023-11-08 01:58:32.712 +0100 [QuartzScheduler_Worker-9] ERROR org.easybatch.core.job.BatchJob - Unable to read next record
java.lang.RuntimeException: javax.xml.stream.XMLStreamException: ParseError at [row,col]:[836463,5]
Message: Premature EOF
    at com.splunk.ResultsReaderXml.getNextEventInCurrentSet(ResultsReaderXml.java:128)
    at com.splunk.ResultsReader.getNextElement(ResultsReader.java:87)
    at com.splunk.ResultsReader.getNextEvent(ResultsReader.java:64)
    at com.splunk.dbx.server.dboutput.recordreader.DbOutputRecordReader.readRecord(DbOutputRecordReader.java:82)
    at org.easybatch.core.job.BatchJob.readRecord(BatchJob.java:189)
    at org.easybatch.core.job.BatchJob.readAndProcessBatch(BatchJob.java:171)
    at org.easybatch.core.job.BatchJob.call(BatchJob.java:101)
    at org.easybatch.extensions.quartz.Job.execute(Job.java:59)
    at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
    at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573)
Caused by: javax.xml.stream.XMLStreamException: ParseError at [row,col]:[836463,5]
Message: Premature EOF
    at com.sun.org.apache.xerces.internal.impl.XMLStreamReaderImpl.next(XMLStreamReaderImpl.java:599)
    at com.sun.xml.internal.stream.XMLEventReaderImpl.nextEvent(XMLEventReaderImpl.java:83)
    at com.splunk.ResultsReaderXml.getResultKVPairs(ResultsReaderXml.java:306)
    at com.splunk.ResultsReaderXml.getNextEventInCurrentSet(ResultsReaderXml.java:124)
    ... 9 common frames omitted

The issue was related to a query timeout. We have set up the upsert_id in the Splunk DB Connect output configuration so that Splunk can perform an insert/update.
Looking into the _internal logs, we understood that when using the upsert_id, Splunk performs a SELECT query for each record it has to insert, and then commits every 1000 records (by default):

2023-11-10 01:22:28.215 +0100 [QuartzScheduler_Worker-12] INFO com.splunk.dbx.connector.logger.AuditLogger - operation=dboutput connection_name=SPLUNK_CONN stanza_name=SPLUNK_OUTPUT state=success sql='SELECT FIELD01,FIELD02,FIELD03 FROM MYSCHEMA.MYTABLE WHERE UPSERT_ID=?'
2023-11-10 01:22:28.258 +0100 [QuartzScheduler_Worker-12] INFO com.splunk.dbx.connector.logger.AuditLogger - operation=dboutput connection_name=SPLUNK_CONN stanza_name=SPLUNK_OUTPUT state=success sql='INSERT INTO MYSCHEMA.MYTABLE (FIELD01,FIELD02,FIELD03) values (?,?,?)'

The upsert_id is very useful to avoid SQL duplicate-key errors, and to recover data whenever the output fails for some reason: you simply re-run the output query, and if a record already exists it is replaced in the SQL table. But the side effect is that the WHERE condition of the SELECT statement can become very inefficient once the database table starts to grow. The solution is to create an SQL index on the upsert_id field in the output database table.

The output run went from 11 minutes to 11 seconds, avoiding the Splunk DB Connect timeout (30 seconds by default, applied to every commit).

Best Regards, Edoardo
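The fix itself is a one-line DDL statement, something like the following, using the schema/table/column names from the log excerpt above (adapt the index name and syntax to your database):

```sql
-- Index the upsert key so the per-record SELECT becomes an index lookup
-- instead of a full table scan
CREATE INDEX IDX_MYTABLE_UPSERT_ID ON MYSCHEMA.MYTABLE (UPSERT_ID);
```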
This thread is over a year old with an accepted solution, so the better way to get a response is to post a new question. Old versions of the Universal Forwarder support index names in inputs.conf exactly the same as newer versions. The index must exist on the indexers, of course, and you must have access to it.
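For reference, specifying the index looks the same in inputs.conf on old and new Universal Forwarder versions; a minimal sketch (the monitored path and index name are placeholders):

```ini
[monitor:///var/log/messages]
index = my_custom_index
sourcetype = syslog
# The index must already exist on the indexers; otherwise events are
# dropped, or routed to lastChanceIndex if that is configured there
```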
Yes, I know. Can we assume the same compression ratio? Or is there any official feedback on that somewhere in the docs?
The deployment server seems to have come up in a bad state, with random read-access errors for some files. That meant that some folders were simply not fetched from the deployment server. Once we got the server back to a fully functional state, the observed issues were resolved.
In general, there are no very strict distribution compatibility dependencies: Splunk requires a specific kernel version and should be happy to work with pretty much any decently modern distribution. The issue typically boils down to:
1. Money (paid vs. free-as-in-free-beer distros)
2. Support (in-house vs. paid support)
3. Skills of your staff and their preferences
4. Standards in your company
A reinstall made the setup page available. I still don't get how the "launch" link directs to the TA_Linux app, but now we can at least start using the app.
1. This part:

| table hostname sourceIp
| dedup hostname

You realize that you will lose additional IP addresses on multihomed hosts?

2. Depending on your data (number of results, size of raw events, run time of each search), there could be different ways to do that. There is a "join" command, but its use is generally discouraged. The typical way is either to append the two result sets and do stats by the common field(s), or to search across both sets, classify the fields into one of the sets (possibly renaming fields), and then do the stats.
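A minimal sketch of the append-then-stats pattern described above (index names, sourcetypes, and the extra field are placeholders, not taken from the original search):

```
index=index_a sourcetype=type_a
| fields hostname sourceIp
| append
    [ search index=index_b sourcetype=type_b
      | fields hostname status ]
| stats values(sourceIp) AS sourceIp values(status) AS status BY hostname
```

Using values() keeps all distinct IP addresses per host, which also sidesteps the multihomed-host problem mentioned in point 1.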
Licensing is fine. I switched from UDP to TCP input from the SAME source and everything is fine: all the logs are ingested, indexed, and searchable properly. UDP seems to be the issue. I am using the admin account created during install, and Splunk is installed on a Windows server with admin privileges. It is just a single search head; no cluster or separate indexers.
I am trying to install the Events Service from the Enterprise Console but don't know how to handle this error:

Task failed: Starting the Events Service api store node on host: newMachineAp as user: root with message: Connection to [http://newMachineAp:9080/_ping] failed due to [Failed to connect to newmachineap/192.168.27.211:9080].
How old is your deployment? The "internal" Splunk communication on 8089 and the KV store on 8191 have been TLS-enabled by default for a long time now. It's just that if you haven't configured them with your own certificates, they use the default Splunk certs (which is not the best idea), but TLS as such is enabled. With inputs/outputs it's a different story: you have to explicitly enable splunktcp-ssl inputs and outputs. And keep in mind that you can't have both TLS and non-TLS inputs if you're using indexer discovery.
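A rough sketch of what explicitly enabling splunktcp-ssl involves (certificate paths, stanza names, and the port are placeholders; this is illustrative, not a drop-in config):

```ini
# inputs.conf on the indexer
[splunktcp-ssl:9997]

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/mycerts/indexer.pem
sslPassword = <cert_password>

# outputs.conf on the forwarder
[tcpout:primary_indexers]
server = idx1.example.com:9997
clientCert = $SPLUNK_HOME/etc/auth/mycerts/forwarder.pem
sslPassword = <cert_password>
```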
I am not sure how we can help you; it is not clear what count_gb and count are. Do you just need to multiply them together to get your answer?

| eval product=count*count_gb