All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I have a new log source from which I am receiving data. There is no vendor TA for this log source (at least for what I am trying to do with it). The logs are in CEF format; they are received by a syslog server and then sent to my indexers, where I can see the data. The problem is that for every one event in the source system, three consecutive events are indexed. For example:

Feb 18 03:43:00 WYPM [2020-02-18T03:43:962684] INFO -- : CEF:0|wypm|Column|2.0|Match|1|begintime= FEB 18 2020 03:43 realtime=FEB 18 2020 03:43 customdate=FEB 18 2020 03:43 customdateheading=Log time report=wypm/reports/1234 cat=external
Feb 18 03:43:00 WYPM [2020-02-18T03:43:920517] INFO -- : CEF:0|wypm|Column|2.0|Match|1|begintime= FEB 18 2020 03:43 realtime=FEB 18 2020 03:43 customdate=FEB 18 2020 03:43 customdateheading="Log time" report=wypm/reports/1234 section=bottom
Feb 18 03:43:00 WYPM [2020-02-18T03:43:920346] INFO -- : CEF:0|wypm|Column|2.0|Match|1|begintime= FEB 18 2020 03:43 realtime=FEB 18 2020 03:43 customdate=FEB 18 2020 03:43 customdateheading="Log time" report=wypm/reports/1234 section=level

I tried writing a small query using a transaction, but unfortunately this does not get rid of the duplicated tags. If I run a transaction on the data, the single event looks as follows:

Feb 18 03:43:00 WYPM [2020-02-18T03:43:962684] INFO -- : CEF:0|wypm|Column|2.0|Match|1|begintime= FEB 18 2020 03:43 realtime=FEB 18 2020 03:43 customdate=FEB 18 2020 03:43 customdateheading=Log time report=wypm/reports/1234 cat=external
Feb 18 03:43:00 WYPM [2020-02-18T03:43:920517] INFO -- : CEF:0|wypm|Column|2.0|Match|1|begintime= FEB 18 2020 03:43 realtime=FEB 18 2020 03:43 customdate=FEB 18 2020 03:43 customdateheading="Log time" report=wypm/reports/1234 section=bottom
Feb 18 03:43:00 WYPM [2020-02-18T03:43:920346] INFO -- : CEF:0|wypm|Column|2.0|Match|1|begintime= FEB 18 2020 03:43 realtime=FEB 18 2020 03:43 customdate=FEB 18 2020 03:43 customdateheading="Log time" report=wypm/reports/1234 section=level

So dedup with consecutive=true seemed to be the next best option. "report" is the common field in these three events, so with a small query such as

index=wypm | dedup report consecutive=true

Splunk combines them into a single event:

Feb 18 03:43:00 WYPM [2020-02-18T03:43:962684] INFO -- : CEF:0|wypm|Column|2.0|Match|1|begintime= FEB 18 2020 03:43 realtime=FEB 18 2020 03:43 customdate=FEB 18 2020 03:43 customdateheading=Log time report=wypm/reports/1234 cat=external section=Unknown

This achieves what I need it to do. However, I want to extract the fields as they appear after the dedup. Can I do this? Other people may also search this index, so I am half tempted to write a macro that calls "index=wypm | dedup report consecutive=true" (and fixes the timestamp). I don't know whether this would work, especially with the extracted fields; I would at least want the extracted fields to be CIM compliant. This method also looks like a messy workaround, but I am not sure of the correct path to follow to achieve what I am trying to do. My intent is to use a search over the combined data to generate an alert.
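The macro idea mentioned above could be sketched roughly as follows; the macro name wypm_combined is a hypothetical placeholder. One caveat worth noting: dedup keeps the raw text and fields of the first surviving event, so values that appear only in the discarded events are not merged into it.

```conf
# macros.conf — sketch of a macro wrapping the dedup
# (macro name "wypm_combined" is hypothetical)
[wypm_combined]
definition = index=wypm | dedup report consecutive=true
iseval = 0
```

A search would then invoke it with backtick quoting, for example: `wypm_combined` | table report cat section — the backticks are literal SPL macro syntax, not formatting.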
From my threat intel source, we tried to forward the intelligence feed to Splunk ES -> Threat Intelligence. The raw intelligence files have all the fields, including "Organisation"; however, ES only shows a subset of them. How can I include the "Organisation" field in ES?
Can you tell me the recommended sourcetype for ingesting each of the following logs?

Log list:
- IIS -> ?
- MS Exchange -> ?
- Gmail -> CSV format?
- Firewall-1 -> the Check Point app?
- SonicWall -> the SonicWall app?
- NetScreen/SSG -> the SSG app?
- FortiGate -> the Fortinet app?
- Proventia -> the Proventia app?
- Oracle -> CSV format?
- PostgreSQL -> CSV format?
- MySQL -> CSV format?
- DB2 -> CSV format?
Is it possible to change the HAProxy add-on to recognize a sourcetype other than haproxy:default(tcp(http)? If so, where can I do this in Splunk Cloud? Thanks
How can the ES Incident Review be customized so that:
1) Once logged in, users can only see the Incident Review dashboard and no other collections/views in the layout;
2) Different users log in with different filtered views;
3) The search boxes are hidden for some users but not all?
I am ingesting JSON data via the HEC on a heavy forwarder, but when I query the data in Splunk Cloud, I get different results depending on which app I use for the query. For example, in the Search & Reporting app, the JSON data creates an event with the fields "ping.jitter" and "ping.latency". However, when I query from a custom app, the event is not created, and the fields "ping.jitter" and "ping.latency" are neither created nor populated with data. Any ideas why?
Hi, I wonder if you can help. I am installing a machine agent on a Linux server, but the installation is not successful. I get the following error:

ERROR StatusLogger No Log4j 2 configuration file found. Using default configuration (logging only errors to the console), or user programmatically provided configurations. Set system property 'log4j2.debug' to show Log4j 2 internal initialization logging. See https://logging.apache.org/log4j/2.x/manual/configuration.html for instructions on how to configure Log4j 2.

Would you mind providing some guidance to fix this? Thanks.
Hi folks, we use Splunk Enterprise Cloud as our central logging and SIEM system. All Windows logs, as well as logs from network devices, are sent to Splunk Cloud. For log transmission we use the following two methods (both are documented in the attached doc):

For Windows and Unix machines -> installing Splunk universal forwarder agents
For network devices that send raw logs -> configuring a Splunk log forwarder on a Docker instance

The Windows logs are parsed correctly in Splunk; however, the logs from the networking devices, which are sent in syslog format, are not. I was wondering if there are any apps (for Palo Alto and Cisco) that would retroactively parse the previously ingested data. If so, what would be the best way of doing so? (Re-ingest the data? Can we parse it as we perform searches?) Many thanks for your help in advance.
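As an illustration of why retroactive parsing can be possible at all: vendor add-ons do much of their field work with search-time knowledge keyed on the sourcetype, and search-time rules apply to already-indexed events whenever they are searched. A minimal sketch follows; the sourcetype name pan:traffic follows the Palo Alto add-on's naming convention, but the extraction regex is an invented simplification, not the add-on's actual rule:

```conf
# props.conf — search-time extraction sketch.
# Applied when events are searched, so it also covers data
# indexed before the app was installed (sourcetype must match).
[pan:traffic]
EXTRACT-action = ,(?<action>allow|deny|drop),
```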
Hello, in the query below I am trying to remove certain strings from the field "message" (or, conversely, to find a specific string), but it does not seem to work: even when I include the string, the results still contain values carrying that message. I also tried a command to remove the events whose "message" field contains those strings, but that does not seem to work either.

index=apps sourcetype="pos-generic:prod" Received request to change status=CONFIRMED OR status=REJECTED partner_account_name="Level Up"
| stats count by status, merchantId
| xyseries merchantId, status, count
| eval result = (REJECTED)/((CONFIRMED+REJECTED))*100
| fillnull value=100 result
| eval count = CONFIRMED + REJECTED
| where count >= 10
| where result >= 20
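A hedged sketch of one way to drop events whose message field contains an unwanted string before aggregating; "unwanted text" is a placeholder for the actual string:

```spl
index=apps sourcetype="pos-generic:prod" partner_account_name="Level Up"
    NOT message="*unwanted text*"
| stats count by status, merchantId
```

Alternatively, adding | regex message!="unwanted text" after the base search filters on a regular expression instead of a wildcard match.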
Hello, I am trying to get the timechart to show results in 2-minute intervals, but it seems to fall back to the default of 5 minutes.

earliest=-180m index=apps sourcetype=pos-generic:prod "com.grubhub.pos.generic.orders.service.OrdersService: Received request to change status" partner_account_name="Level Up"
| dedup orderId
| search status=REJECTED
| timechart count by status minspan=2m
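For what it's worth, minspan only sets a lower bound on the bucket size and leaves Splunk free to pick a larger one; forcing fixed 2-minute buckets uses span instead. A sketch based on the query above:

```spl
earliest=-180m index=apps sourcetype=pos-generic:prod
    "com.grubhub.pos.generic.orders.service.OrdersService: Received request to change status"
    partner_account_name="Level Up"
| dedup orderId
| search status=REJECTED
| timechart span=2m count by status
```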
This app has been giving us many skipped searches and maintenance issues. Why does the tSessions_Lookup_Update report take a long time to complete? Is there a way to find out whether the app is being used? Is there a dedicated index for this app?
Hi guys, I am working for a new client who wants me to develop a monthly report/dashboard for their business. I am trying to get a picture of their needs by asking them some business questions about what will help make this report a reality. I am technical, but I am trying to ask them strictly business questions regarding their request. Here are some of the questions I have in mind, but I still need as many questions as possible:
• How many concurrent users will be pulling this report at a single instance?
• How many users will need access to this dashboard?
• How often is the data expected to change?
• How often would you like to receive this report? Daily, weekly, etc.
Please send me as many business-related questions as possible, and keep in mind that these are questions for business people who have never heard of Splunk. I will appreciate your help.
The F5 Security App (https://splunkbase.splunk.com/app/815/) used to work with Splunk version 7.2. After upgrading Splunk to version 8, I am no longer able to access the app. This message appears: "Oops. Looks like this view is using Advanced XML, which has been removed from Splunk Enterprise....". It seems the F5 app is not supported by Splunk v8. Did anyone face this issue and manage to solve it from the Splunk side? Unfortunately, there is no direct contact information on the app website to ask the developer whether a new release will be published, so I am submitting this question here.
I am getting the following errors and am not sure why; is anyone aware of this alert? Does this error have any impact?

02-13-2020 02:52:43.167 +0000 WARN HttpListener - Socket error from XX.xx.xxx.xxx while accessing /services/collector: Connection closed by peer
Hi, we are testing an 8.x Splunk version using a Docker image on a POC virtual machine in preparation for migrating our 7.3.4 dev cluster. We've noticed a change in the values function of the tstats command: in 7.3.4 the values function could be called with no input parameter, while in 8.x the values() function must have an input parameter. So, for a query like this:

| tstats values where index=our_index by fieldA, fieldB
| rename fieldA as A, fieldB as B
| where like(A,"%some_criteria%") OR like(A,"%some_criteria%")
| dedup A
| dedup B

we are having some difficulty working out the equivalent search in 8.x Splunk. We tried a query like this one:

| tstats values(fieldA), values(fieldB) where index=our_index by fieldA, fieldB
| rename fieldA as A, fieldB as B
| where like(A,"%some_criteria%") OR like(A,"%some_criteria%")
| dedup A
| dedup B

but we don't know if it's the right way, because the output has two extra columns, values(A) and values(B), with the same values as columns A and B. Do you have any suggestions for this particular case, or any docs we could study on these changes? Thanks a lot.
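Since the by clause already produces one row per distinct fieldA/fieldB pair, one possible 8.x equivalent uses a plain count as a placeholder aggregate and then discards it. A sketch, assuming the distinct pairs are all that is needed (the like() criteria are kept as placeholders from the original query):

```spl
| tstats count where index=our_index by fieldA, fieldB
| fields - count
| rename fieldA as A, fieldB as B
| where like(A, "%some_criteria%")
| dedup A
| dedup B
```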
Hi, I'm running the following searches and getting different results for the same time range (All time) when comparing projects.

With this search I get many projects and their total unique "Defect ID"s; for the ACTIVATION project, I get 23 results:

index="my_index" source="my_csv.csv"
| dedup "Defect ID"
| stats count by "Project Name"

With this search I get 36 results:

index="my_index" source="my_csv.csv" "Project Name"=ACTIVATION
| dedup "Defect ID"
| stats count by "Project Name"

Why do I get MORE results when I add "Project Name"=ACTIVATION to the search? When I add | search "Project Name"=ACTIVATION somewhere after the dedup command, I still get 23:

index="my_index" source="my_csv.csv" "Project Name"=ACTIVATION
| dedup "Defect ID"
| search "Project Name"=ACTIVATION
| stats count by "Project Name"
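One explanation consistent with numbers like these: dedup "Defect ID" keeps only the first event per ID in result order, so when the whole source is searched, an ID shared across projects is credited to just one project; restricting the input before dedup changes which events survive. If the actual intent is "distinct Defect IDs per project", a sketch that avoids dedup entirely by using the distinct-count aggregate:

```spl
index="my_index" source="my_csv.csv"
| stats dc("Defect ID") as distinct_defects by "Project Name"
```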
Hi all, this is probably an already-asked question, but I cannot find it: I upgraded Splunk to 8.0.2 on my test site before rolling it out to the production systems. As documented, I set python.version = python3 under [general] in $SPLUNK_HOME/etc/system/local/server.conf, and (after a restart) this setting is confirmed by btool. But when checking the Python version (with splunk cmd python -V) I still get Python 2.7.17. Where could the error be? Did I miss some step? The documentation only says to add the python.version = python3 row in server.conf and restart Splunk, but I still have the old version. For now I am checking on a Windows system, but the production systems are Red Hat. Thank you. Ciao. Giuseppe
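For reference, the documented setting looks like the sketch below. One thing worth checking against the 8.0 docs (stated here as an assumption, not a certainty): python.version governs which interpreter Splunk uses for its own scripts and apps, while splunk cmd python may still launch the bundled Python 2 binary directly; splunk cmd python3 targets the bundled Python 3.

```conf
# $SPLUNK_HOME/etc/system/local/server.conf
[general]
python.version = python3
```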
Hello, we are running Splunk version 7.1.3 and have two SHCs connected to our indexers. For one of the SHCs, the members keep flickering between 'Up' and 'Down' status on the 'Indexer Clustering' page. One of the previous posts suggested increasing 'generation_poll_interval' from 5 to 60 seconds; in our case, 'generation_poll_interval' defaults to 5 for members of both SHCs, yet the flickering status only happens for members of one SHC and not the other. Any further input on this behavior would be appreciated. Thanks
Hello, I'm trying to make an availability graph based on the calculation below:

index="MY_INDEX" host="MY_HOST" NOT "UNWANTED_VHOST"
| stats count(eval(status="500" OR status="501" OR status="502" OR status="503" OR status="504" OR status="505" OR status="506" OR status="507" OR status="508" OR status="509" OR status="510" OR status="511")) as error count(eval(status="200")) as good
| head 100
| eval calc = (100/(good+error))*good
| stats sum(calc) as Disponibilité

The calculation is OK, but I can't manage to create a timechart where the evolution of "Disponibilité" is calculated day by day. Do you have any idea how I can do that? Regards,
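A sketch of a day-by-day variant, computing the same ratio inside 1-day buckets. Two assumptions about the intent: the status range 500-511 is condensed into a regex via match(), and head 100 is dropped because it does not fit a time series:

```spl
index="MY_INDEX" host="MY_HOST" NOT "UNWANTED_VHOST"
| eval error=if(match(status, "^5(0\d|1[01])$"), 1, 0), good=if(status="200", 1, 0)
| timechart span=1d sum(error) as error, sum(good) as good
| eval Disponibilité = round(100 * good / (good + error), 2)
| fields _time, Disponibilité
```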
Hi all, I created a little app to set the "updateCheckerBaseURL" value to "0". I then copied it to /opt/splunk/etc/deployment-apps on the deployment server and changed the folder permissions to the splunk user. The app was deployed to the clients, and "./splunk cmd btool web list settings" displays "updateCheckerBaseURL = 0", but the update notification is still there. If I set "updateCheckerBaseURL = 0" directly in "$SPLUNK_HOME/etc/system/local/web.conf", the update notification is disabled. Please advise - Markus
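For reference, the deployed app would carry a web.conf like the sketch below (the app name is a hypothetical placeholder). One configuration-precedence point worth noting: settings in $SPLUNK_HOME/etc/system/local override the same settings delivered by any app, so a conflicting value there on a client wins over the deployed one.

```conf
# deployment-apps/disable_update_checker/local/web.conf
# (app name "disable_update_checker" is hypothetical)
[settings]
updateCheckerBaseURL = 0
```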