All Posts

Hi @law175, are you receiving UDP logs or TCP logs? Ciao. Giuseppe
This is a portion of my dashboard XML, posted in parts due to the 20k character limit.
Hi @sekhar463, please try this regex: | rex field=hostname "(?<host_name>[^\.]+)\." Ciao. Giuseppe
For now just one. All logs are being forwarded to a logging server (VMware vRealize Log Insight). Then I am sending logs via syslog from that appliance to Splunk. All logs should come from that 192.168.79.1 on either UDP:9004 or TCP:9008, depending on what I choose.  
<form version="1.1" theme="dark"> <label>Error Overview</label> <description>These charts only show apps having errors in the selected time frame</description> <fieldset submitButton="false"> <input type="time" token="field1"> <label></label> <default> <earliest>-24h@h</earliest> <latest>now</latest> </default> </input> </fieldset> <row> <panel> <title>Across Time</title> <chart> <search> <query>(index=ivss OR index=hec_18399_na_prod) NOT "*ivss-test*" NOT (SourceName=Microsoft-Windows-CAPI2) NOT (SourceName=Microsoft-Windows-DistributedCOM) NOT (SourceName="Microsoft WSE 3.0") NOT (SourceName=Microsoft-Windows-GroupPolicy) NOT (SourceName=Microsoft-Windows-Eventlog) NOT (SourceName=Logging) NOT (SourceName=ADFSAuth) NOT (SourceName=Schannel) NOT "*PackageExtractor.exe*" NOT "*w3wp.exe*" NOT "*openssl.exe*" (Type="Error" OR Level="Error") | eval AppName = case( (SourceName="KmsService" AND Message="*Mailer(*"), "Mailer", (SourceName="KmsService" AND Message="*SPackager(*"), "SPackager", (SourceName="KmsService" AND Message="*Hancock(Ver:*"), "Hancock", (SourceName="KmsService" AND Message="*GVMSAuth(Ver:*"), "GVMSAuth", (source="Cloud.SecurePnC"), "Cloud_SecurePnC", (source="ivssspd"), "SecurePackageDelivery", (sourcetype="WinEventLog:System" AND EventCode=5074), "AppPool_Restarts", (source="ivsscs" AND 'Properties.Service'="SecureConnect"), "Cloud_SecureConnect", (source="ivsscs" AND 'Properties.Service'="SecureMessage"), "Cloud_SecureMessage", (source="ivsscs" AND 'Properties.Service'="FPackager"), "Cloud_FPackager", (SourceName="IVSSCS" AND match(_raw, ".*Service = SecureMessage.*")), "SecureMessage", (SourceName="IVSSCS" AND match(_raw, ".*Service = SecureConnect.*")), "SecureConnect", (SourceName="KmsService"), "KmsService", (SourceName="AutoSigner"), "AutoSigner", (SourceName="DebugToken"), "DebugToken", (SourceName="FlashbackCache"), "FlashbackCache", (SourceName="KeyBundler"), "KeyBundler", (SourceName="SecureModuleCore"), "SecureModuleCore", 
(SourceName="SecureOTACore"), "SecureOTACore", (SourceName="SecurePaaK"), "SecurePaaK", (SourceName="SecurePayloadCore"), "SecurePayloadCore", (SourceName="SecurePnCCore"), "SecurePnCCore", (SourceName="SecureRekey"), "SecureRekey", (SourceName="SecureSigner"), "SecureSigner", (SourceName="SupplierFeed"), "SupplierFeed", (SourceName="TRON"), "TRON", (SourceName="WSLAgent5"), "WSLAgent5", (SourceName="MMU"), "MMU", 1==1, "Other") | timechart usenull=f useother=f limit=0 span=1h count by AppName</query> <earliest>$field1.earliest$</earliest> <latest>$field1.latest$</latest> </search> <option name="charting.axisTitleX.visibility">collapsed</option> <option name="charting.chart">line</option> <option name="charting.drilldown">all</option> <option name="height">500</option> <option name="refresh.display">progressbar</option> <drilldown target="_blank"> <condition match="$click.value$=&quot;Mailer&quot;"> <set token="app_query">(SourceName="KmsService" AND Message="*Mailer(*")</set> <eval token="start_time">$row._time$</eval> <eval token="end_time">$row._time$ + $row._span$</eval> <link target="_blank">search?q=(index%3Divss%20OR%20index%3Dhec_18399_na_prod)%0ANOT%20%22*ivss-test*%22%0ANOT%20(SourceName%3DMicrosoft-Windows-CAPI2)%0ANOT%20(SourceName%3DMicrosoft-Windows-DistributedCOM)%0ANOT%20(SourceName%3D%22Microsoft%20WSE%203.0%22)%0ANOT%20(SourceName%3DMicrosoft-Windows-GroupPolicy)%0ANOT%20(SourceName%3DMicrosoft-Windows-Eventlog)%0ANOT%20(SourceName%3DLogging)%0ANOT%20(SourceName%3DADFSAuth)%0ANOT%20(SourceName%3DSchannel)%0ANOT%20%22*PackageExtractor.exe*%22%0ANOT%20%22*w3wp.exe*%22%0ANOT%20%22*openssl.exe*%22%0A(Type%3D%22Error%22%20OR%20Level%3D%22Error%22)%0A$app_query$%0A&amp;earliest=$start_time$&amp;latest=$end_time$</link> </condition> <condition match="$click.value$=&quot;SPackager&quot;"> <set token="app_query">(SourceName="KmsService" AND Message="*SPackager(*")</set> <eval token="start_time">$row._time$</eval> <eval token="end_time">$row._time$ + 
$row._span$</eval> <link target="_blank">search?q=(index%3Divss%20OR%20index%3Dhec_18399_na_prod)%0ANOT%20%22*ivss-test*%22%0ANOT%20(SourceName%3DMicrosoft-Windows-CAPI2)%0ANOT%20(SourceName%3DMicrosoft-Windows-DistributedCOM)%0ANOT%20(SourceName%3D%22Microsoft%20WSE%203.0%22)%0ANOT%20(SourceName%3DMicrosoft-Windows-GroupPolicy)%0ANOT%20(SourceName%3DMicrosoft-Windows-Eventlog)%0ANOT%20(SourceName%3DLogging)%0ANOT%20(SourceName%3DADFSAuth)%0ANOT%20(SourceName%3DSchannel)%0ANOT%20%22*PackageExtractor.exe*%22%0ANOT%20%22*w3wp.exe*%22%0ANOT%20%22*openssl.exe*%22%0A(Type%3D%22Error%22%20OR%20Level%3D%22Error%22)%0A$app_query$%0A&amp;earliest=$start_time$&amp;latest=$end_time$</link> </condition> <condition match="$click.value$=&quot;Hancock&quot;"> <set token="app_query">(SourceName="KmsService" AND Message="*Hancock(Ver:*")</set> <eval token="start_time">$row._time$</eval> <eval token="end_time">$row._time$ + $row._span$</eval> <link target="_blank">search?q=(index%3Divss%20OR%20index%3Dhec_18399_na_prod)%0ANOT%20%22*ivss-test*%22%0ANOT%20(SourceName%3DMicrosoft-Windows-CAPI2)%0ANOT%20(SourceName%3DMicrosoft-Windows-DistributedCOM)%0ANOT%20(SourceName%3D%22Microsoft%20WSE%203.0%22)%0ANOT%20(SourceName%3DMicrosoft-Windows-GroupPolicy)%0ANOT%20(SourceName%3DMicrosoft-Windows-Eventlog)%0ANOT%20(SourceName%3DLogging)%0ANOT%20(SourceName%3DADFSAuth)%0ANOT%20(SourceName%3DSchannel)%0ANOT%20%22*PackageExtractor.exe*%22%0ANOT%20%22*w3wp.exe*%22%0ANOT%20%22*openssl.exe*%22%0A(Type%3D%22Error%22%20OR%20Level%3D%22Error%22)%0A$app_query$%0A&amp;earliest=$start_time$&amp;latest=$end_time$</link> </condition> </drilldown> </chart> </panel> </row> </form>
Hello. We are trying to change the blacklists below:

blacklist3 = EventCode="4690"
blacklist4 = EventCode="5145"
blacklist5 = EventCode="5156"
blacklist6 = EventCode="4658"
blacklist7 = EventCode="5158"

into a single blacklist with multiple event codes. We have tried:

blacklist3 = EventCode=5145,5156,4658,4690,5158

and

blacklist3 = EventCode="5145" OR "5156" OR "4658" OR "4690" OR "5158"

Neither of these applies and blocks out the event codes. Any recommendations on how to get this to work?
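One hedged suggestion (not confirmed in this thread, so test it on a dev instance first): the value of a WinEventLog blacklist entry in inputs.conf is treated as a regular expression, so alternation inside a single value may work where the comma list and the OR syntax do not:

```ini
# inputs.conf sketch: blacklist values are regexes, so alternation
# can match several EventCodes in one entry
blacklist3 = EventCode="4690|4658|5145|5156|5158"
```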
Hi @law175, what's the result of running

index=your_index host=192.168.79.1

Do you have one or two sources? Ciao. Giuseppe
Please share your dashboard code.
Yes. What I do is Data Inputs > TCP > New Local TCP > Port 9008 TCP / Source type = syslog / Method = IP / App Context = Search and Reporting / Index = default. If I switch, I delete the TCP input and add a new UDP input with the same settings. Right now I have both TCP 9008 and UDP 9004 running, with the same appliance forwarding logs via syslog to both ports. TCP has been working fine for 4 days now (though with lots of dropped events). The appliance sends logs fine via UDP, but Splunk stopped returning search results within an hour. The last log shown is always the same one: a memory output that breaks down over around 50 lines (I assume it is too big). Could an oversized syslog message be causing an error that breaks searching? The search I am using is source="udp:9004"
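For reference, a sketch of roughly what those UI steps correspond to in inputs.conf (assumptions: connection_host = ip matches the "Method = IP" option, and no index line means events go to the default index):

```ini
# Sketch only - mirrors the UI settings described above
[tcp://9008]
sourcetype = syslog
connection_host = ip

[udp://9004]
sourcetype = syslog
connection_host = ip
```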
Hi @law175, a very stupid question: when you changed protocol from UDP to TCP, did you change the input stanza? Which search are you using? Ciao. Giuseppe
It seems that this picks up ALL web access, so the exclude list needs to be a lot more complicated, as every accessed URI ends up here: ".../dashboards", "...report", "...reports", etc. So this would work, but the search will need to be a lot more complicated than before. Any suggestions on how to accomplish a search for dashboards visited are much appreciated.
I had this search set up to be able to audit dashboard usage:

index=_internal source=*splunkd_ui_access.log /app NOT (user="-" OR uri_path="*/app/*/search")

After updating to 9.1.1, very few events match this search. After a bit of digging, it seems that what used to be

"GET /en_US/app/<appname>/<dashboard> HTTP/1.1"

is no longer there, and the '/app' URI part no longer points to dashboards. Instead, I can find the dashboards accessed as

"GET /en-US/splunkd/__raw/servicesNS/<user>/<dashboard>/data/ui/<lots>/<more>

As best as I can see, the information I am interested in now seems to reside in web_access.log instead, which previously contained a lot more information (like the __raw log does now). The events in this log file look like this:

"GET /en-GB/app/<app>/<dashboard> HTTP/1.1"

So I need to modify the original search to exclude launcher and use a different pattern for search, etc. My question is whether this is the correct and optimal approach: working with web_access.log instead of the now seemingly harder-to-use splunkd_ui_access.log. Or should I be looking at some other source, or in some other way?
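A hedged sketch of what the adjusted search against web_access.log might look like (the launcher and search exclusions are assumptions based on the patterns described above, not a verified query; field names assume the standard access-log extractions):

```
index=_internal source=*web_access.log uri_path="*/app/*"
    NOT (user="-" OR uri_path="*/app/launcher/*" OR uri_path="*/app/*/search")
| stats count by user, uri_path
```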
I'm not aware of any Splunk documentation on the matter.
Hello. We had this error on an output query set up in Splunk DB Connect. Basically, the Splunk query inserts data into an external database:

2023-11-08 01:58:32.712 +0100 [QuartzScheduler_Worker-9] ERROR org.easybatch.core.job.BatchJob - Unable to read next record java.lang.RuntimeException: javax.xml.stream.XMLStreamException: ParseError at [row,col]:[836463,5] Message: Premature EOF at com.splunk.ResultsReaderXml.getNextEventInCurrentSet(ResultsReaderXml.java:128) at com.splunk.ResultsReader.getNextElement(ResultsReader.java:87) at com.splunk.ResultsReader.getNextEvent(ResultsReader.java:64) at com.splunk.dbx.server.dboutput.recordreader.DbOutputRecordReader.readRecord(DbOutputRecordReader.java:82) at org.easybatch.core.job.BatchJob.readRecord(BatchJob.java:189) at org.easybatch.core.job.BatchJob.readAndProcessBatch(BatchJob.java:171) at org.easybatch.core.job.BatchJob.call(BatchJob.java:101) at org.easybatch.extensions.quartz.Job.execute(Job.java:59) at org.quartz.core.JobRunShell.run(JobRunShell.java:202) at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573) Caused by: javax.xml.stream.XMLStreamException: ParseError at [row,col]:[836463,5] Message: Premature EOF at com.sun.org.apache.xerces.internal.impl.XMLStreamReaderImpl.next(XMLStreamReaderImpl.java:599) at com.sun.xml.internal.stream.XMLEventReaderImpl.nextEvent(XMLEventReaderImpl.java:83) at com.splunk.ResultsReaderXml.getResultKVPairs(ResultsReaderXml.java:306) at com.splunk.ResultsReaderXml.getNextEventInCurrentSet(ResultsReaderXml.java:124) ... 9 common frames omitted

The issue was related to a query timeout. We had set up the upsert_id in the Splunk DB Connect output configuration so that Splunk can do insert/update.
Looking into the _internal log, we understood that when using the upsert_id, Splunk performs a SELECT query for each record it has to insert, and then commits every 1000 records (by default):

2023-11-10 01:22:28.215 +0100 [QuartzScheduler_Worker-12] INFO com.splunk.dbx.connector.logger.AuditLogger - operation=dboutput connection_name=SPLUNK_CONN stanza_name=SPLUNK_OUTPUT state=success sql='SELECT FIELD01,FIELD02,FIELD03 FROM MYSCHEMA.MYTABLE WHERE UPSERT_ID=?'

2023-11-10 01:22:28.258 +0100 [QuartzScheduler_Worker-12] INFO com.splunk.dbx.connector.logger.AuditLogger - operation=dboutput connection_name=SPLUNK_CONN stanza_name=SPLUNK_OUTPUT state=success sql='INSERT INTO MYSCHEMA.MYTABLE (FIELD01,FIELD02,FIELD03) values (?,?,?)'

The upsert_id is very useful for avoiding an SQL duplicate-key error, and for recovering data when the output fails for some reason: you simply re-run the output query, and any record that already exists is replaced in the SQL table. The side effect is that the WHERE condition of that SELECT statement can become very inefficient once the database table grows large. The solution is to create an SQL index on the upsert_id field in the output database table.

The output run went from 11 minutes to 11 seconds, avoiding the Splunk DB Connect timeout (30 seconds by default, applied to every commit).

Best Regards, Edoardo
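The fix described above can be sketched as a single SQL statement (table and column names are taken from the log excerpt; the index name is made up for illustration, and exact syntax varies by database):

```sql
-- Index the upsert key so the per-record SELECT ... WHERE UPSERT_ID=?
-- becomes an index lookup instead of a full table scan
CREATE INDEX IDX_MYTABLE_UPSERT_ID ON MYSCHEMA.MYTABLE (UPSERT_ID);
```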
This thread is over a year old and has an accepted solution, so the better way to get a response is to post a new question. The old versions of the Universal Forwarder support index names in inputs.conf exactly the same as newer versions. The index must exist on the indexers, of course, and you must have access to it.
Yes, I know. Can we assume the same compression ratio? Or is there any official feedback on that somewhere in the docs?
The deployment server seems to have come up in a bad state, with random read-access errors for some files. That meant that some folders were simply not fetched from the deployment server. Once we got the server back to a fully functional state, the observed issues were resolved.
In general, there are no very strict distribution compatibility dependencies - Splunk requires a specific minimum kernel version and should be happy to work with pretty much any reasonably modern distribution. The choice typically boils down to: 1. Money (paid vs. free-as-in-free-beer distros) 2. Support (in-house vs. paid support) 3. Skills of your staff and their preferences 4. Standards in your company
A re-install made the setup page available. I still don't get how the "launch" link directs to the TA_Linux app, but now we can at least start using the app.