Activity Feed
- Posted TLS Cert for Proxy within add-ons on Splunk Enterprise. 12-06-2023 06:52 AM
- Tagged TLS Cert for Proxy within add-ons on Splunk Enterprise. 12-06-2023 06:52 AM
- Posted Tags allow list CIM Setup are blank after upgrade on Knowledge Management. 11-08-2023 12:25 AM
- Posted Work out how much data splunk searches per day / month / on average on Splunk Search. 11-02-2023 08:47 AM
- Posted Re: Notification when indexes stop receiving data on Getting Data In. 07-19-2023 10:58 PM
- Karma Re: Notification when indexes stop receiving data for meetmshah. 07-19-2023 10:58 PM
- Posted Notification when indexes stop receiving data on Getting Data In. 07-19-2023 09:49 AM
- Got Karma for Re: Tab Delimiter field extractions not working. 11-18-2022 06:29 AM
- Posted Re: Tab Delimiter field extractions not working on Knowledge Management. 11-18-2022 01:38 AM
- Posted Re: Tab Delimiter field extractions not working on Knowledge Management. 11-18-2022 01:33 AM
- Posted Re: Tab Delimiter field extractions not working on Knowledge Management. 11-18-2022 12:04 AM
- Posted Re: Tab Delimiter field extractions not working on Knowledge Management. 11-17-2022 07:40 AM
- Posted Re: Tab Delimiter field extractions not working on Knowledge Management. 11-17-2022 07:22 AM
- Posted Why are Tab Delimiter field extractions not working? on Knowledge Management. 11-17-2022 02:59 AM
- Posted Cisco Umbrella Log Collection SSL validation failed? on All Apps and Add-ons. 10-14-2022 04:14 AM
- Posted How do I fix AWS Lambda HEC error when parsing? on Getting Data In. 09-21-2022 04:29 AM
- Posted K8 AWS HF Delays / Timeouts on Getting Data In. 11-15-2021 11:21 PM
- Posted HEC Introspection Debug not working on Splunk Enterprise. 10-01-2021 07:57 AM
- Posted Re: parsing_err="No data" JSON Works in Add Data on Getting Data In. 08-16-2021 11:08 PM
12-06-2023
06:52 AM
We have recently switched from one proxy to another in our organisation. When we enter the new proxy details in the relevant add-ons (ServiceNow, Cisco Umbrella, etc.), the data feeds stop. The network team tell me we need to use the CA file they supply. Does anyone know where this needs to be installed in Splunk? I thought under /etc/auth/, but I'm not sure how we point the config at it.
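A minimal sketch of one common placement, assuming a standard $SPLUNK_HOME layout; the directory and file name below are hypothetical. Note that sslRootCAPath only governs splunkd's own TLS connections; add-ons that make their own HTTPS calls (ServiceNow, Cisco Umbrella, etc.) may keep a separate CA bundle, so check each add-on's proxy documentation too:

# server.conf, e.g. in $SPLUNK_HOME/etc/system/local/
# proxy_ca.pem is a placeholder name for the CA file the network team supplied
[sslConfig]
sslRootCAPath = $SPLUNK_HOME/etc/auth/mycerts/proxy_ca.pem

A splunkd restart is needed for the change to take effect.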
- Labels: configuration
11-08-2023
12:25 AM
I wonder if anyone else has experienced this and can advise. We upgraded from 9.0.3 to 9.1.1, and also upgraded ES to 7.2.0 and CIM to 5.2.0. However, when we go into the CIM Setup page from the Enterprise Security menu, the Tags Allow List is now empty. In the underlying datamodels.conf, tags_whitelist is still populated under the relevant data model stanzas, but it isn't displaying in the GUI.
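For anyone comparing, a quick way to confirm what the search head actually resolves from datamodels.conf is the standard btool CLI (the grep pattern is just illustrative):

$SPLUNK_HOME/bin/splunk btool datamodels list --debug | grep -i tags_whitelist

If the setting shows up here but not in the GUI, that would suggest the mismatch is in how the CIM Setup page reads it rather than in the configuration layer.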
- Labels: tag
11-02-2023
08:47 AM
Hello, we are trying to work out how much data our Splunk instances search through on average. We've written a search that tells us our platform runs 75-80,000 searches a day; only a few of these are manual, with the rest coming from saved/correlation searches. Is there anywhere in the system, or a search we can write, that would tell us, for instance, that these 75,000 searches scanned a total of 750 GB of data? We are researching the possibility of moving to a platform that charges per search, so if we can get these figures we can see how much a like-for-like replacement would actually cost.
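One starting point is the audit index, with the caveat that scan_count records events scanned rather than bytes, so turning it into GB needs an assumed average event size. A rough sketch, where 500 bytes per event is purely a placeholder:

index=_audit action=search info=completed earliest=-30d
| stats count AS searches sum(scan_count) AS events_scanned
| eval approx_gb = round((events_scanned * 500) / 1024 / 1024 / 1024, 2)

Calibrating the placeholder against your own average event size (daily licence usage divided by daily event count, for example) makes the estimate less rough.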
07-19-2023
10:58 PM
Amazing, thank you for this! I'll give this a go today.
- Tags: amazing
07-19-2023
09:49 AM
Hi, we've had a problem recently where data has stopped flowing to an index, and it's a few days before we find out and resolve it. Does anyone know of a Splunk 9.x feature, or an add-on, that can be used to monitor/alert when data stops arriving for a set amount of time?
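One common home-grown approach is sketched below; the 60-minute threshold is an arbitrary placeholder, and tstats only sees indexes the running user can search:

| tstats latest(_time) AS last_seen WHERE index=* BY index
| eval minutes_silent = round((now() - last_seen) / 60)
| where minutes_silent > 60

Saved as a scheduled alert, this lists any index that has gone quiet for longer than the threshold.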
11-18-2022
01:38 AM
1 Karma
Found out what it was: the app I created wasn't shared globally, so the transforms weren't visible.
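For anyone landing here later, a minimal sketch of that sharing fix in the app's metadata/local.meta (the same thing can be done from Settings > All configurations by setting the objects' sharing to Global):

[transforms]
export = system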
11-18-2022
01:33 AM
I have found the following errors, so it looks like the transform isn't being detected in Splunk. Do I need to make them global or something like that?
11-18-2022 09:30:37.315 +0000 WARN SearchOperator:kv [9210 TcpChannelThread] - Could not find a transform named denodo-vdp-fields
11-18-2022
12:04 AM
Thanks for your reply. I have tried this and for some reason it's not working; I can't work out at all why. I've also tried writing regex for the transforms, which works fine in regex101 but again doesn't extract.
11-17-2022
07:40 AM
Thanks, can you see if I've missed anything that may be causing it in that case?
11-17-2022
07:22 AM
Thanks for the response. When I extract them using the GUI, the "Extract Fields" tab-delimiter option does pick the fields out correctly. I thought that if we put the app across the search head it would extract them at search time, not at index time?
11-17-2022
02:59 AM
I've followed the documentation and also some examples on here, but for some reason I can't seem to get these to extract.
Here is an example of the log:
xxx localhost 9997 8003 test test endRequest 2266 2022-11-17T08:08:06.617 2022-11-17T08:08:06.640 23 0 - OK - - DESC EXTENDED VIEW test_data_imp DESC - Denodo-Scheduler JDBC 127.0.0.1 - -
The props are as follows:
[denodo-vdp-queries]
SHOULD_LINEMERGE = true
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
REPORT-denodo-vdp-queries-fields = REPORT-denodo-vdp-queries-fields
The transforms are as follows:
[REPORT-denodo-vdp-queries-fields]
DELIMS = "\t"
FIELDS = "server_name","host","port","id","database","username","notification_type","sessionID","start_time","end_time","duration","waiting_time","num_rows","state","completed","cache","query","request_type","elements","user_agent","access_interface","client_ip","transaction_id","web_service_name"
I've pushed the app to the forwarders that send in the data, and it's in the right sourcetype. I've also pushed the app across the SH cluster. However, none of the fields are extracted; am I missing a step?
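A quick way to check whether the search tier can actually see the transform is btool on a search head (standard CLI, nothing hypothetical here):

$SPLUNK_HOME/bin/splunk btool transforms list REPORT-denodo-vdp-queries-fields --debug

If nothing comes back, the stanza isn't visible in that context, which usually points at app placement or sharing.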
- Labels: field extraction
10-14-2022
04:14 AM
Can anyone assist with this? I see quite a few people have successfully got the logs working by following this workaround:
https://support.umbrella.com/hc/en-us/articles/360001388406-Configuring-Splunk-with-a-Cisco-managed-S3-Bucket
However, we get the following error when trying to run the shell script:
fatal error: SSL validation failed for <link> EOF occurred in violation of protocol (_ssl.c:1129)
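One way to narrow this down is to test the TLS handshake to the bucket endpoint directly; the hostname below is a placeholder, so substitute the S3 endpoint the script actually uses:

openssl s_client -connect s3.amazonaws.com:443 -showcerts

An EOF during the handshake tends to point at a proxy or middlebox cutting the connection rather than at the certificate itself.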
- Labels: configuration
09-21-2022
04:29 AM
I wonder if someone can help. We are getting the following error when trying to send data into Splunk. This previously worked, but now we can't seem to get it working at all. I have tried to curl the event manually and it succeeds, which is even stranger. The error message is:
token name=xxxx, channel=********* source_IP=******, reply=6, events_processed=0, http_input_body_size=101493, parsing_err="While expecting event object to start: Unexpected character while looking for value: 'E', totalRequestSize=101493"
The event we are trying to send looks like this:
{
  "time": 1663679182,
  "host": "test-sandbox",
  "source": "aws/lambda",
  "sourcetype": "aws:lambda",
  "index": "xxx-xxx",
  "event": {
    "message": "2022/09/20 14:06:22 node=test-sandbox Starting to move cantabm_testfile-9.zip from c21-metadata-dropzone-sandbox to c21-metadata-dest-sandbox/Metadata/cantabm_testfile-9.zip\n",
    "account": "11111111111"
  }
}
- Labels: HTTP Event Collector
11-15-2021
11:21 PM
Hello, we are wondering if anyone else has experienced issues using a k8s cluster of heavy forwarders to receive AWS Firehose data into a GCP Splunk Enterprise setup via HEC. We are seeing lots of duplicates of the data and, on the flip side, some timeouts, meaning the event is sent to the S3 bucket rather than being ingested into Splunk. We thought this was an isolated issue in our setup, so we set up a pre-prod environment with the same design, and the same problem occurs there.
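Firehose's retry behaviour is driven by HEC acknowledgements, so one thing worth checking is that the token has indexer acknowledgment enabled. A sketch of the inputs.conf stanza on the heavy forwarders (the token name is a placeholder):

# inputs.conf
[http://firehose-token]
token = <your-token-value>
useACK = true

Without acks, Firehose can time out and retry events Splunk has already indexed, which would account for both the duplicates and the S3 fallback.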
- Labels: heavy forwarder
10-01-2021
07:57 AM
Hello, we are trying to diagnose a parsing error from AWS Firehose to Splunk using HEC. The endpoint is configured properly, but we are getting "no data" parsing errors. To try and debug this, I have switched DEBUG on for httpeventcollector on the heavy forwarder receiving the data. However, the introspection log is still only showing INFO. Am I setting DEBUG in the wrong place, or has anyone else overcome this?
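If memory serves, the HEC processor logs under the HttpInputDataHandler category in splunkd.log rather than in the introspection log. A sketch of raising it in $SPLUNK_HOME/etc/log-local.cfg (the same category can also be set temporarily under Settings > Server settings > Server logging):

[splunkd]
category.HttpInputDataHandler=DEBUG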
- Labels: troubleshooting
08-16-2021
11:08 PM
Thanks for getting back to me, I worked it out in the end. As it was being sent through as an event, I had to wrap every KVP in "event":{} and that sorted it out. It took quite a bit of work with curl.
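For anyone hitting the same thing, a minimal before/after sketch of that wrapping, with the field list trimmed for brevity:

Rejected (metadata and payload mixed at the top level):
{ "time": "1628855079519", "host": "sgw-3451B77A", "operation": "ReadData", "status": "Success" }

Accepted (payload wrapped in "event"):
{ "time": "1628855079519", "host": "sgw-3451B77A", "event": { "operation": "ReadData", "status": "Success" } }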
08-15-2021
11:31 PM
Wondered if someone can assist me. We're trying to send some log files from AWS in JSON format, coming over as an event. I've copied the log into a text file and gone to Add Data; initially it fails, but after changing the sourcetype to _json it formats fine. However, when trying to send the data in properly, I just get a parsing error. Is there an easy way to identify what's causing this? The format is as follows:
{
  "time": "1628855079519",
  "host": "sgw-3451B77A",
  "source": "share-114D5B31",
  "sourcetype": "aws:storagegateway",
  "sourceAddress": "xx.xx.xx.xx",
  "accountDomain": "XXX",
  "accountName": "server_name",
  "type": "FileSystemAudit",
  "version": "1.0",
  "objectType": "File",
  "bucket": "test-test-test",
  "objectName": "/random-210813-1230.toSend",
  "shareName": "test-test-test",
  "operation": "ReadData",
  "timestamp": "1333222111111",
  "gateway": "aaa-XXXXXXA",
  "status": "Success"
}
- Labels: JSON, source, sourcetype
08-10-2021
12:59 AM
Hello, my work has kindly paid for me to study Fundamentals 2. Do the 30 days start now, or do they start once I click "start course"?
04-03-2020
05:21 AM
I'm wondering if someone can assist. The KV store has gone down on our search heads since deploying a new app yesterday. I have checked for the mongod.lock file and also tried a --repair, but neither of these seems to work.
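Two standard CLI checks worth running on an affected search head; note that clean kvstore deletes the local KV store data so it can resync from the other cluster members, so treat it as a last resort and take a backup first:

$SPLUNK_HOME/bin/splunk show kvstore-status
$SPLUNK_HOME/bin/splunk clean kvstore --local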
- Tags: kvstore
- Labels: kvstore
11-21-2019
03:16 AM
Hello, we are seeing some strange results when trying to map RAS connections for our organisation.
The search I am running, index=cisco_collect_std sourcetype=cisco:asa "New Connection Established" | iplocation Remote_IP, shows that we have several connections from India, Ukraine, and Egypt, but when we check the IP addresses they are actually based in the UK.
An example of the data this search is working on is here:
Nov 21 10:58:52 10.174.128.11 Nov 21 2019 10:58:52 CR2PDMZASA02 : %ASA-5-750006: Local:10.xxx.xxx.21:4500 Remote:84.68.89.156:65100 Username:xxxxxx IKEv2 SA UP. Reason: New Connection Established
The regex for Remote_IP is pulling out 84.68.89.156.
We have updated the mmdb, and when we interrogate the database using iplocation directly it returns the correct location.
Any advice on what could be going on here would be great.
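One thing that can explain this kind of mismatch: iplocation is a distributable streaming command, so in the real search it may execute on the indexers, whose mmdb copy can be older than the search head's. A quick sketch to compare; the makeresults version runs entirely on the search head:

| makeresults
| eval Remote_IP="84.68.89.156"
| iplocation Remote_IP
| table Remote_IP City Region Country

If this returns the UK while the full search says India or Ukraine, the indexers' mmdb is the next place to look.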
09-25-2019
12:25 AM
Hello, we are trying to configure a lastChanceIndex to capture events sent to a non-existent index; however, it doesn't seem to be working. I've added lastChanceIndex = test_collect_std to indexes.conf, but we still get the error message:
Search peer indexer-6 has the following message: Received event for unconfigured/disabled/deleted index=fake_index with source="source::D:\tmp\ExampleLog.log" host="host::MACHINE" sourcetype="sourcetype::fake_sourcetype". So far received events from 1 missing index(es).
So the re-route doesn't seem to be doing what it should, and there is very little documentation on this. Has anyone successfully got this to work?
For info, we are running 7.3.0.
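For reference, a sketch of how this is usually set; the setting needs to live in the [default] stanza of indexes.conf on the indexers themselves (not just a search head or forwarder), and the named index must already exist there:

# indexes.conf on each indexer
[default]
lastChanceIndex = test_collect_std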
08-29-2019
09:20 AM
Took a bit of fudging, but I got the desired outcome. Thanks very much.
08-29-2019
08:41 AM
Thanks, I'll try this now.
08-29-2019
08:15 AM
I've created a table with monitoring in it for our daily checks.
However, I still need an eval to get the total duration in minutes for each service, which is ("Test File End" - Test_Start).
The search below is my attempt to eval this field. It actually works when the fields I am using are not included in the join subsearch; however, when I join on the subsearch field, the field returns blank.
It has been suggested to do this without a join, but as the data is in a separate index, the file start and end fields come back blank.
index=test | bucket _time span=1d as Day | stats earliest(_time) as TEST_Start latest(_time) as TEST_End by Day
| eval TEST_Start=strftime(TEST_Start,"%H:%M:%S")
| eval TEST_End=strftime(TEST_End,"%H:%M:%S")
| eval Day=strftime(Day,"%d/%m/%Y")
| join Day [search index=test2 State=START Service="Testing" | bucket _time span=1d as Day | stats values(FileTime) as "TEST File Start" by Day | eval Day=strftime(Day,"%d/%m/%Y")]
| join Day [search index=test2 State=END Service="Testing" | bucket _time span=1d as Day | stats values(FileTime) as "Test File End" by Day | eval Day=strftime(Day,"%d/%m/%Y")]
| eval st = strptime(Test_Start,"%H:%M:%S") | eval et = strptime("Test File End","%H:%M:%S") | eval diff = et - st | eval "TEST_Total" = tostring(diff, "duration")
| fields Day Test_Start Test_End "Test File Start" "Test File End" "TEST_Total"
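For reference, two things in the final eval would behave this way even without the join: strptime("Test File End", ...) parses the literal string "Test File End" rather than the field, and eval needs single quotes around field names that contain spaces. A sketch of the corrected lines (note the search above also mixes Test_Start and TEST_Start):

| eval st = strptime(TEST_Start,"%H:%M:%S")
| eval et = strptime('Test File End',"%H:%M:%S")
| eval diff = et - st
| eval TEST_Total = tostring(diff, "duration")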
- Tags: splunk-cloud