Activity Feed
- Karma Re: Why is my file not being indexed? for jplumsdaine22. 06-05-2020 12:49 AM
- Got Karma for Re: "search NOT" not working - not excluding the expected results. 06-05-2020 12:49 AM
- Got Karma for anyone able to provide details on what the capability edit_local_apps in authorize.conf allows?. 06-05-2020 12:49 AM
- Karma Re: Why am I unable to find orphaned searches or alerts? for bmacias84. 06-05-2020 12:48 AM
- Karma Re: How to create application specific user roles? for jkat54. 06-05-2020 12:48 AM
- Karma Re: How to overcome sub search limitation (only 10k records). for romedome. 06-05-2020 12:47 AM
- Karma Re: Allowing one role per app for somesoni2. 06-05-2020 12:47 AM
- Got Karma for Re: ldapsearch not getting all key/properties/fields from AD. 06-05-2020 12:47 AM
- Karma Re: Each File as One Single Splunk Event for gkanapathy. 06-05-2020 12:46 AM
- Karma Re: Each File as One Single Splunk Event for gkanapathy. 06-05-2020 12:46 AM
- Karma Advanced documentation for field extraction/transformation? for AHinMaine. 06-05-2020 12:45 AM
- Karma Re: Advanced documentation for field extraction/transformation? for Lowell. 06-05-2020 12:45 AM
- Posted Re: Advanced documentation for field extraction/transformation? on Splunk Search. 08-20-2018 05:59 PM
- Posted Re: Need help understanding how Transform "access-extractions" works on Splunk Search. 08-20-2018 05:10 PM
- Posted Need help understanding how Transform "access-extractions" works on Splunk Search. 08-20-2018 04:10 PM
- Tagged Need help understanding how Transform "access-extractions" works on Splunk Search. 08-20-2018 04:10 PM
- Tagged Need help understanding how Transform "access-extractions" works on Splunk Search. 08-20-2018 04:10 PM
- Tagged Need help understanding how Transform "access-extractions" works on Splunk Search. 08-20-2018 04:10 PM
- Posted Re: Best practise when working with Clustered Endpoints on Deployment Architecture. 07-30-2018 02:20 PM
- Posted Best practise when working with Clustered Endpoints on Deployment Architecture. 07-24-2018 04:21 PM
08-20-2018 05:59 PM
Thanks Lowell,
This is a great response and very helpful to my current issue.
08-20-2018 05:10 PM
Found another post with the info I needed: https://answers.splunk.com/answers/2683/advanced-documentation-for-field-extraction-transformation.html
08-20-2018 04:10 PM
Hi to all that read this. Hoping one of you might be able to provide some assistance.
We have an app that produces logs in Extended Common Log Format. The sourcetype we are currently using is linked to the access-extractions transform, but it is not giving us all the required fields.
I have tried a number of different approaches to get the required values using regex, but given the nature of the logs, it feels like I would need a large number of regex entries to capture all the variations.
After figuring out that we were using the access-extractions transform, I thought a better approach would be to edit it to suit. However, I'm still fairly new to regex and not really sure what the regex in this transform is actually doing or how it works.
A sample of the logs we are working with:
10.x.x.x www.blah.au - [20/Aug/2018:08:06:19 +1000] "GET /ebs/picmi/picmirepository.nsf/PICMI?OpenForm&t=PI&k=D&r=http%3A%2F%2Fwww.assediomoral.org%2Findex.php%2Fspip.php%3Farticle106 HTTP/1.1" 200 53245 "http://a.bla.es/?u=https%3A%2F%2Fwww.ebs.tga.gov.au%2Febs%2Fpicmi%2Fpicmirepository.nsf%2FPICMI%3FOpenForm%26t%3DPI%26k%3DD%26r%3Dhttp%253A%252F%252Fwww.assediomoral.org%252Findex.php%252Fspip.php%253Farticle106" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/64.0.3282.189 Safari/537.36 Vivaldi/1.95.1077.60" 422 "" "d:/Lotus/Domino/data/ebs/picmi/picmirepository.nsf"
10.x.x.x www.blah.au "107831_67744" [20/Aug/2018:08:06:19 +1000] "GET /ebs/lm/lmdrafts.nsf/xAgentUpdateValidationMonitoring.xsp?documentId=7D35903C63DAEB54CA2582C000426C09&dojo.preventCache=1534716380650 HTTP/1.1" 200 78 "https://www.ebs.tga.gov.au/ebs/LM/LMDrafts.nsf/GenApp.xsp?documentId=7d35903c63daeb54ca2582c000426c09&action=editDocument" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/11.1.2 Safari/605.1.15" 31 "_ga=GA1.3.644697231.1517015993;
_gid=GA1.3.1541615874.1534641115; DomAuthSessId=A004127B4D088BDBD4B14B7E1BF0928B; WelcomeDialogLM=1; SessionID=9E1B7E03146C77042992C7B008ABB7DB303BC2AD" "d:/Lotus/Domino/data/ebs/lm/lmdrafts.nsf"
10.x.x.x www.blah.au - [20/Aug/2018:08:06:15 +1000] "GET /ebs/picmi/picmirepository.nsf/PICMI?OpenForm&t=PI&k=D&r=http%3A%2F%2Fwww2.ogs.state.ny.us%2Fhelp%2Furlstatusgo.html%3Furl%3Dhttp%253A%252F%252Fpedagogie.ac-toulouse.fr%252Feco-golfech%252Fspip.php%253Farticle129 HTTP/1.1" 200 53566 "https://www.apemsa.es/web/guest/analisis-de-agua/-/asset_publisher/7OQq/content/dureza?redirect=https%3A%2F%2Fwww.ebs.tga.gov.au%2Febs%2Fpicmi%2Fpicmirepository.nsf%2FPICMI%3FOpenForm%26t%3DPI%26k%3DD%26r%3Dhttp%253A%252F%252Fwww2.ogs.state.ny.us%252Fhelp%252Furlstatusgo.html%253Furl%253Dhttp%25253A%25252F%25252Fpedagogie.ac-toulouse.fr%25252Feco-golfech%25252Fspip.php%25253Farticle129" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.170 Safari/537.36,gzip(gfe)" 282 "" "d:/Lotus/Domino/data/ebs/picmi/picmirepository.nsf"
The particular fields that we are after are the last 3, which represent the time to process, the cookie header, and the translated URL.
Regex from access-extractions:
^[[nspaces:clientip]]\s++[[nspaces:ident]]\s++[[nspaces:user]]\s++[[sbstring:req_time]]\s++[[access-request]]\s++[[nspaces:status]]\s++[[nspaces:bytes]](?:\s++"(?<referer>[[bc_domain:referer_]]?+[^"]*+)"(?:\s++[[qstring:useragent]](?:\s++[[qstring:cookie]])?+)?+)?[[all:other]]
I'm assuming I need to update the last part of this ("[[all:other]]"), but I have tried running it in the GUI search box and in regex101, and neither seems able to work with it, so I'm struggling to understand how to make the update correctly.
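If I understand correctly, the double-bracketed tokens like [[nspaces:clientip]] are references to other stanzas in transforms.conf rather than raw regex, which would explain why regex101 can't parse the expression. One approach I'm considering instead of editing access-extractions itself is to layer an extra extraction on top of the sourcetype in props.conf, anchored to the end of the line. An untested sketch - the stanza name and field names are placeholders, and it assumes the three trailing fields always appear in this order:
[my:ecl:sourcetype]
# Hypothetical: pull the trailing time-to-process, cookie header and translated URL
EXTRACT-trailing_fields = \s(?<time_to_process>\d+)\s+"(?<cookie_header>[^"]*)"\s+"(?<translated_url>[^"]*)"$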
07-30-2018 02:20 PM
Thanks jplum, for your suggestion above. When you say a dedicated forwarder, do you mean one separate from either of the clustered instances?
How do you manage the connection to the moving drive from this instance to get the logs?
07-24-2018 04:21 PM
I have a situation where I need to collect logs from an application that sits on clustered servers with a drive that moves with the cluster; the logs are stored on this "Active Drive".
I have read about issues whereby Splunk doesn't pick up the drive when a node becomes active; this is resolved by ensuring that the Splunk service is restarted as part of the fail-over, so I'm OK with that part of the problem.
My current concerns are more around indexing duplicate data.
As each host has its own copy of the Splunk UF and its own fishbucket, I suspect that when fail-over occurs, the newly active host will commence indexing from the last point it knows about, but many of these logs will likely have been indexed by the alternate host while it was active.
Just wondering if there is a way around this?
Can Splunk be installed on the "Active Drive" and move with the fail-overs, thereby maintaining one copy of the fishbucket for the clustered instance?
07-16-2018 06:13 PM
So......
Seems there were a couple of issues at play here, with a syntax error being the one that stumped me for a while.
Thanks jplumsdaine22 for your input - the only one to offer anything on the topic. Seems you were 100% on the money.
After modifying the inputs to use crcSalt = <SOURCE> (the literal string, angle brackets included), rather than the malformed value I had before, everything is working as expected.
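For anyone following along, the working input ends up looking roughly like this (the monitor stanza from my original question further down, with the salt added):
[monitor://Q:\IIB_Log\CustomerDominoConsumer\ActivateDeactivateClient\mqsiarchive\*.log]
disabled = false
index = hprg_applog_nonprod
sourcetype = hprg:iib:xml
# <SOURCE> is the literal string, angle brackets included; it adds each file's
# full path to the CRC check, so files with identical contents are still indexed
crcSalt = <SOURCE>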
07-12-2018 03:18 PM
Thanks jplumsdaine22,
On closer inspection, I determined that the contents of 2 of the files were identical, so I tried adding crcSalt = <SOURCE> to my inputs.conf - however, it didn't seem to make any difference; it still only indexed 2 of the files.
07-09-2018 06:12 PM
I'm trying to on-board a new application and am having issues from the get-go.
The application is IBM IIB, and it outputs logs into a number of sub-directories beneath one parent logs directory.
Logs are single events per log file, in a mix of XML and JSON formats.
I have started with the XML versions and one of the sub-folders to confirm that my inputs are working correctly.
There are 3 files in the directory and all 3 show as being read, but only 2 made it to the index and show up in search.
I'm trying to understand why the 3rd is not making it to the indexer.
Inputs:
[monitor://Q:\IIB_Log\CustomerDominoConsumer\ActivateDeactivateClient\mqsiarchive\*.log]
disabled = false
index = hprg_applog_nonprod
sourcetype = hprg:iib:xml
After setting this up and seeing the issue, I ran "splunk list inputstatus" on the server where the files reside, and I can see the below output regarding the files:
Q:\IIB_Log\CustomerDominoConsumer\ActivateDeactivateClient\mqsiarchive\*.log
type = directory
Q:\IIB_Log\CustomerDominoConsumer\ActivateDeactivateClient\mqsiarchive\20180410_214256_061062_CustomerDominoConsumer_ActivateDeateClient_SOAPRequest.log
file position = 313
file size = 313
parent = Q:\IIB_Log\CustomerDominoConsumer\ActivateDeactivateClient\mqsiarchive\*.log
percent = 100.00
type = finished reading
Q:\IIB_Log\CustomerDominoConsumer\ActivateDeactivateClient\mqsiarchive\20180410_233834_470779_CustomerDominoConsumer_ActivateDeateClient_SOAPRequest.log
file position = 313
file size = 313
parent = Q:\IIB_Log\CustomerDominoConsumer\ActivateDeactivateClient\mqsiarchive\*.log
percent = 100.00
type = finished reading
Q:\IIB_Log\CustomerDominoConsumer\ActivateDeactivateClient\mqsiarchive\20180410_234012_541048_CustomerDominoConsumer_ActivateDeateClient_SOAPRequest.log
file position = 313
file size = 313
parent = Q:\IIB_Log\CustomerDominoConsumer\ActivateDeactivateClient\mqsiarchive\*.log
percent = 100.00
type = finished reading
which to me suggests that Splunk has read and processed all 3 files, yet when I run the search "index=_internal host=DWxxxxxS31 q:\"
I get the below:
07-05-2018 15:38:09.884 +1000 INFO Metrics - group=per_source_thruput, series="q:\iib_log\customerdominoconsumer\activatedeactivateclient\mqsiarchive\20180410_234012_541048_customerdominoconsumer_activatedeactivateclient_soaprequest.log", kbps=0.009649, eps=0.063136, kb=0.305664, ev=2, avg_age=3682772.500000, max_age=7365545
host = DWxxxxxS31 source = C:\Program Files\SplunkUniversalForwarder\var\log\splunk\metrics.log sourcetype = splunkd
07-05-2018 15:38:09.884 +1000 INFO Metrics - group=per_source_thruput, series="q:\iib_log\customerdominoconsumer\activatedeactivateclient\mqsiarchive\20180410_214256_061062_customerdominoconsumer_activatedeactivateclient_soaprequest.log", kbps=0.009649, eps=0.063136, kb=0.305664, ev=2, avg_age=3687826.500000, max_age=7375653
host = DWxxxxxS31 source = C:\Program Files\SplunkUniversalForwarder\var\log\splunk\metrics.log sourcetype = splunkd
07-05-2018 15:37:39.025 +1000 INFO TailingProcessor - Adding watch on path: Q:\IIB_Log\CustomerDominoConsumer\ActivateDeactivateClient\mqsiarchive.
host = DWxxxxxS31 source = C:\Program Files\SplunkUniversalForwarder\var\log\splunk\splunkd.log sourcetype = splunkd
07-05-2018 15:37:39.025 +1000 INFO TailingProcessor - Parsing configuration stanza: monitor://Q:\IIB_Log\CustomerDominoConsumer\ActivateDeactivateClient\mqsiarchive\*.log.
host = DWxxxxxS31 source = C:\Program Files\SplunkUniversalForwarder\var\log\splunk\splunkd.log sourcetype = splunkd
which to me suggests Splunk has only processed 2 of the 3 files. This also fits with the fact that when I search the sourcetype for the input, I'm only getting 2 results.
Looking for any tips / suggestions that might help me troubleshoot this issue.
07-02-2018 09:36 PM
We are in a similar situation but need to run the LDAP search on an HF and have the results sent back to the indexers. However, when we run the collect command, it seems to just store the stash file locally on the server rather than writing back to the indexers.
Any way to work around this and force the write back to the indexers?
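My rough understanding (a sketch, not verified) is that the stash file written by collect gets re-ingested on the HF and should then follow normal forwarding, so the first thing I'm checking is the [tcpout] configuration in outputs.conf on the HF - along these lines, with placeholder indexer names:
[tcpout]
defaultGroup = primary_indexers
# false = don't keep a local copy; send everything on to the indexers
indexAndForward = false

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
Also worth checking that no forwardedindex.<n>.whitelist/blacklist filters under [tcpout] are excluding the summary index.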
I'm trying to figure out if it's possible to allow a power user to edit the navigation menus for their own app.
Looking through all the capabilities, I haven't been able to identify anything that looks like it would do this specifically.
I found the edit_local_apps capability, but I can't find any documentation on what it actually allows, so I can't tell whether it's something I could use.
Has anyone experimented with this capability and can provide some info about it?
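For context, if the capability does turn out to cover nav editing, granting it would presumably follow the usual authorize.conf pattern - the role name here is just an example:
[role_app_naveditor]
# Hypothetical role: inherits power and adds the capability in question
importRoles = power
edit_local_apps = enabled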
05-29-2018 10:42 PM
Hi ptang
Running btool gives me the following outputs (only the relevant entries included):
/opt/splunk/etc/apps/config_SH_webconf/local/web.conf privKeyPath = etc/auth/healthCerts/HealthSearcheadPrivateKey.key
/opt/splunk/etc/apps/config_SH_webconf/local/web.conf serverCert = etc/auth/healthCerts/searchheadcertcombined.pem
/opt/splunk/etc/apps/config_SH_webconf/local/web.conf sslVersions = tls1.2
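For reference, the file-plus-setting listing above is what btool's debug flag produces:
/opt/splunk/bin/splunk btool web list --debug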
Path and directory listing below match the above output:
-bash-4.2$ ls -la /opt/splunk/etc/auth/healthCerts/
total 44
drwxr-xr-x. 2 splunk splunk 4096 Apr 24 11:15 .
drwx------. 8 splunk splunk 4096 May 28 12:05 ..
-rw-r--r--. 1 splunk splunk 1704 Apr 24 11:15 HealthSearcheadPrivateKey.key
-rw-r--r--. 1 splunk splunk 6261 Apr 24 11:15 searchheadcertcombined.pem
-rw-r--r--. 1 splunk splunk 2894 Apr 24 11:15 searchheadcert.pem
-rw-r--r--. 1 splunk splunk 631 Apr 24 11:15 splunkCertConfig.conf
-rw-r--r--. 1 splunk splunk 1435 Apr 24 11:15 splunksec.csr
-rw-r--r--. 1 splunk splunk 8843 Apr 24 11:15 splunkweb.pem
MuS, thanks for the extra info - I agree with your thoughts on btool, so I ran your command as well, just to compare.
Relevant entries:
privKeyPath=etc/auth/healthCerts/HealthSearcheadPrivateKey.key
serverCert=etc/auth/healthCerts/searchheadcertcombined.pem
sslVersions=tls1.2
From this - I can only assume that things are configured correctly - yet, it's not using this cert.
Any other thoughts on why not?
05-29-2018 05:36 PM (1 Karma)
So... after much stuffing about, I was informed about the 10k return limitation of subsearches.
As our NOT search was returning more than 10k results, the overflow was impacting our final results.
But then I found this, which provided the solution to my issue:
https://answers.splunk.com/answers/207150/how-to-overcome-sub-search-limitation-only-10k-rec.html
I just came across this gem via a co-worker. Do:
dedup Order_Number
| search NOT [
    | inputlookup Order_Details_Lookup.csv
    | stats values(Order_Number) AS Order_Number]
| table Order_Number
That will make the subsearch return a single row with a multi-value field containing all of the order numbers, but the individual values will get passed along correctly into the base search.
05-29-2018 04:54 PM
Hi xpac,
Yes - I did check the splunkd logs for both warnings and errors - nothing obvious.
I have also tried searching for cert, privatekey, and the cert name - nothing comes up suggesting errors.
05-27-2018 09:30 PM
We have generated an SSL cert using our internal CA server, configured to work for a number of our servers, including 3 SHs.
We have created an app that pushes out a web.conf file with a stanza containing the following items:
[settings]
privKeyPath = etc/auth/healthCerts/HealthSearcheadPrivateKey.key
serverCert = etc/auth/healthCerts/searchheadcertcombined.pem
sslVersions = tls1.2
I have confirmed that the correct files are available and that the splunk user has access to them, and I have confirmed via btool that the above settings are in effect - yet on one of our servers, it is still using the default self-signed cert for some reason.
The above works perfectly on the other 2 SHs; it's just the one where it doesn't.
Have checked /etc/system/local - but there are no entries for web.conf, only in default.
I have restarted the Splunk service on the SH a number of times - but still using the default cert.
Not sure what I'm missing or what else I can check, but I'd appreciate any suggestions people might have.
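In case it's useful for diagnosis, one way I know of to confirm which cert is actually being served (hostname is a placeholder; port 8000 assumes the default Splunk Web port):
openssl s_client -connect mysearchhead:8000 </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates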
01-15-2018 08:44 PM
After speaking with Splunk support, and in conjunction with the details outlined here: https://docs.splunk.com/Documentation/Splunk/7.0.1/Indexer/Moveanindex, the steps taken were as follows:
1. Set the cluster master into maintenance mode.
2. Stop index server 1, then apply the required updates to the index app and splunk-launch.conf to point the indexes to the correct location.
3. Copy the index folders from the old location to the new location.
4. Start the index server and confirm it starts correctly.
5. Repeat steps 2-4 on the remaining index servers.
6. Take the cluster master out of maintenance mode.
7. Update the index app within master-apps.
8. Re-deploy the updated app.
After following these steps, I can confirm that all indexes are now pointing to the correct spots with no issues.
01-11-2018 03:13 PM
Messages return after clearing.
I also see them when I run the health check from Monitoring Console
01-11-2018 02:19 PM
Thanks for the help mayurr98.
I have checked indexes.conf - homePath on all indexes is set to $SPLUNK_DB/IndexName/db/.
As mentioned, I have updated $SPLUNK_DB via splunk-launch.conf on both indexers in my cluster - yet I'm still seeing some indexes that are using the old path.
When running | dbinspect index=yourindex I get the below. It seems to indicate that the internal indexes are holding the old path, whereas the others are picking up the new path.
_telemetry - /opt/splunkhot/_telemetry/db/db_1515589234
_introspection - /opt/splunkhot/_introspection/db/rb_1515619431
dlm_uberagent_log - /var/lib/db/splunkhot/dlm_uberAgent_log/db/rb_1515619434
dlm_uberagent - /var/lib/db/splunkhot/dlm_uberagent/db/rb_1515575095
Not sure why the internals are holding the old path.
Is there another spot I need to check for config?
01-10-2018 04:30 PM
Hi HiroshiSatoh,
I did see this setting; unfortunately I cannot update it, as the space currently being used is shared with the OS and total disk space is down to 5 GB free - hence the attempts to move the indexes.
01-10-2018 03:28 PM
Splunk Version 6.6.2
I am getting lack-of-space errors due to the poor set-up of our Splunk environment; I am trying to resolve this, but having issues.
The error I'm currently receiving (there were others, but this seems to be the last one) is below:
Search peer server3 has the following message: Disk Monitor: The index processor has paused data flow. Current free disk space on partition '/' has fallen to 4492MB, below the minimum of 5000MB. Data writes to index path '/opt/splunkhot/_audit/db' cannot safely proceed. Increase free disk space on partition '/' by removing or relocating data.
Steps taken so far:
I have followed the steps outlined here "https://docs.splunk.com/Documentation/Splunk/7.0.1/Indexer/Moveanindex" to move the indexes to a new location.
I have also updated /opt/splunk/etc/splunk-launch.conf with the following: SPLUNK_DB=/var/lib/db/splunkhot
Splunk has been restarted on the indexers and the cluster master, yet I'm still seeing the above error after restarting.
Not sure what else to check / update.
Update:
So.... I have determined the cause of my issue - but now I'm not sure of the best steps to resolve it.
Within master-apps, I have an indexes app which defines a number of indexes explicitly - i.e. not using $SPLUNK_DB in the path.
How does one update this without breaking index integrity? Under normal circumstances, one would shut down the indexer, relocate the indexes, update the path, restart the service, and all would be good.
But when I deploy the app to update it, the services on the index servers are restarted automatically, not giving me a chance to copy the indexes.
Can I copy prior to pushing the update? Or is there a method of deploying where the services are not restarted automatically?
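To illustrate what I mean, a rough before/after for one such stanza (the index name is just an example; the real app differs):
# Before: hard-coded paths in the master-apps indexes app
[example_app_index]
homePath = /opt/splunkhot/example_app_index/db
coldPath = /opt/splunkhot/example_app_index/colddb
thawedPath = /opt/splunkhot/example_app_index/thaweddb

# After: $SPLUNK_DB-relative, so splunk-launch.conf controls the root
[example_app_index]
homePath = $SPLUNK_DB/example_app_index/db
coldPath = $SPLUNK_DB/example_app_index/colddb
thawedPath = $SPLUNK_DB/example_app_index/thaweddb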
- Tags:
- errors
- indexer-path
01-04-2018 04:06 PM
I tested the sAMAccountName on both the search and the lookup, using eval to add a _ before and after the field value.
In both instances, there were no extra spaces in the value.
I have also just tested with len(sAMAccountName); in both the lookup and the search the field is 6 characters, so it also matches there.
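Roughly what I ran on each side, in case I'm missing something obvious (appended to the base search and to an | inputlookup respectively):
| eval marked="_".sAMAccountName."_", name_len=len(sAMAccountName)
| table sAMAccountName, marked, name_len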
01-04-2018 03:59 PM
I have tweaked the search slightly just to confirm the matching is working correctly within the search by looking for a single user:
| search sAMAccountName
[ | inputlookup dlm_msadAllAccounts.csv
| table sAMAccountName
| search sAMAccountName=kxxxxm ]
This returns 1 result, as expected.
Given that matching is definitely working, and that I have an almost identical search working using the NOT with a filter on the lookup to reduce the compared records, I'm running out of ideas on why this is not working without a filter on the lookup.
01-04-2018 03:34 PM
Yes - I have tested that to ensure that the values match, including cutting and pasting the sAMAccountName from the original search into a new search on the lookup to confirm there is a match.
Interestingly though, in double-checking this, I found that some values are actually being filtered out by the NOT. If I remove the NOT, I get 140 results back, compared to 107 with it. The expected result, though, is about 5, so I'm a long way from where I need to be.
01-04-2018 03:13 PM
Just to add some extra info (and confusion): I'm running another version of this report with a minor tweak, and it works perfectly.
The tweak is that I'm filtering out disabled users - this reduces the number of results in the lookup to about 6k, rather than 20k without the filter:
| search NOT
[| inputlookup dlm_msadAllAccounts.csv
| search userAccountControl!=ACCOUNTDISABLE*
| table sAMAccountName]
Are there any restrictions in relation to NOT and the number of records it can process?
01-04-2018 02:50 PM
Checked permissions - all good there.
Also, the lookup was created within the same app doing the search.