Hi, we are using a custom Java program to index data into Splunk using the Java SDK (1.6.5.0). We have a huge volume of data to index daily, approximately 100,000 documents. We used the TCP method, but it requires opening a port and binding it to a specific source to index data. We receive data in bulk (several records), process it, and index each record separately. To index each record as a unique event, we have to connect to the TCP port and close it again for every record. This works very slowly (likely due to connecting to and closing the TCP port for each record). We also tried the com.splunk.Index API (submit()/attach()), but the indexing rate is very slow. We have Java parsers that do business-specific processing on the data, so we don't want to move to forwarders and rewrite the parsing logic. Can anyone suggest the best practice/approach for indexing with the Java SDK, considering the high data velocity? Thanks.
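One widely used approach for high-volume custom ingestion is the HTTP Event Collector (HEC): the client keeps one HTTP connection (or pool) and sends many events per request, avoiding the per-record connect/close cost of raw TCP. Below is a minimal sketch of the batching idea, shown in Python for brevity (the same pattern applies from Java with any HTTP client). The URL, token, index, and sourcetype are placeholders, not real values.

```python
import json
import urllib.request

# Placeholders for illustration only - substitute your own HEC endpoint/token.
HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"  # hypothetical token

def build_batch(records, index="main", sourcetype="myapp"):
    """Concatenate many records into one newline-delimited HEC payload,
    so a single HTTP request indexes a whole batch of distinct events."""
    return "\n".join(
        json.dumps({"event": r, "index": index, "sourcetype": sourcetype})
        for r in records
    )

def send_batch(payload):
    """POST one batch to HEC; each JSON object becomes a separate event.
    (Requires a reachable HEC endpoint, so it is not invoked here.)"""
    req = urllib.request.Request(
        HEC_URL,
        data=payload.encode("utf-8"),
        headers={"Authorization": "Splunk " + HEC_TOKEN},
    )
    return urllib.request.urlopen(req)

payload = build_batch([{"msg": "first"}, {"msg": "second"}])
print(payload.count("\n") + 1)  # number of events carried by one request
```

The key point is that batching amortizes connection setup over many events, which is usually where per-record TCP approaches lose throughput.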
Hello all, I have a situation in which I need to use a local lookup file as input to another search; however, the secondary search runs against an external database using DB Connect. So the question is: how would I read the inputlookup file in as input (a WHERE clause) to a SQL query, rather than to an SPL search? I have done the opposite in the past: used a lookup file to compare against the results of a SQL query. If it has any bearing on the answer, the lookup file will be a CSV with multiple values for a single field. Thank you.
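One pattern for this (a sketch only; the lookup name, field, connection, and table names below are all placeholders) is to collapse the lookup values into a quoted, comma-separated string and substitute it into a `dbxquery` via `map`:

```
| inputlookup my_lookup.csv
| stats values(my_field) AS vals
| eval in_list = "'" . mvjoin(vals, "','") . "'"
| map maxsearches=1 search="| dbxquery connection=my_connection query=\"SELECT * FROM my_table WHERE my_field IN ($in_list$)\""
```

`map` performs the `$in_list$` substitution before the inner search runs, which is why the value can land inside the SQL string where an ordinary subsearch would not be expanded.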
Hi, I have 2 events which do not contain the same fields.

Event A:
{ account_id: 1234, description: "Test", id: "efgh" }

Event B:
{ account_id: 5678, description: "Dev", id: "abcd", name: [ { group_id: "efgh" }, { group_id: "ijkl" }, { group_id: "mnop" } ] }

I have a table like this:

id      group_id
abcd    efgh
        ijkl
        mnop
efgh

I want to check if the id "efgh" is a value of the "group_id" field. I tried this: | eval result=if(like(id,"%".group_id."%"),"OK","Not OK") but it gave me "Not OK" as the result. Can you help me please? Thanks!
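Since `group_id` here is multivalue, `like()` will not compare against each value the way the expression hopes. One way (a sketch, assuming `group_id` is multivalue and `id` is single-valued) is `mvfind`, which applies a regex to every value and returns the index of the first match, or null if none matches:

```
| eval result = if(isnotnull(mvfind(group_id, "^" . id . "$")), "OK", "Not OK")
```

Anchoring with `^` and `$` avoids partial matches such as "efgh" matching "xefghy".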
I have already installed the McAfee ePO add-on in Splunk. I am asking how the data should be ingested into Splunk, please. How should the McAfee threat data be ingested: via a syslog push, or pulled by the McAfee ePO add-on? Is the ePO add-on all that is needed? My Splunk version is 8.0.
Hi, I have built a report to extract several fields. Summary indexing by default sends results into the "stash" sourcetype; this is how Splunk knows the data is already in Splunk, so my summary data will not be counted against additional license. But I want to change the time parameter of the stash sourcetype. I have a field "timestamp" in my log, in epoch time, and I would like to index my logs with the "timestamp" field, but it does not work. I created a props.conf in my app: [stash] TIME_PREFIX = timestamp\=\" TIME_FORMAT = %s. I also tried TIMESTAMP_FIELDS = timestamp, but with no better result. Example of timestamp value : How can I get the timestamp value into the _time field? Can you help me please?
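A common way to control the timestamp of summary events is to set `_time` in the populating search itself, before the results are written, rather than re-parsing the stash sourcetype with props.conf. A sketch, assuming the epoch field is called `timestamp` and using a placeholder summary index name:

```
... your report search ...
| eval _time = timestamp
| collect index=my_summary_index
```

Because `collect` writes each row's `_time` as the event time, the summary events then search and bucket by the log's own epoch field.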
Hello,
We installed the Splunk App for VMware in our environment. We followed the documentation and installed the required add-on for VMware that contains all the TAs.
We placed the TAs according to the documentation and configured the app as described.
Now, when we look at the app, part of the panels are empty. We checked the "app install health collected source types" view and, according to it, we receive all the needed information; yet, for example, on the Home tab the Datastore Information panel is empty.
We inspected the SPL behind the panels and ran it from search, and we noticed that some of the commands are unknown; we couldn't find any details about them. For example, these commands appeared:
- ifields
- gaugetable
We don't know where the problem is; the data does seem to flow, since we do receive a big part of it.
Has anyone experienced this issue? Any help would be great!
Thanks,
Omershira
Hello all, I've built a dashboard with 5 time charts, all using the same base search. It starts loading normally, but then halfway through it "hangs" and shows the error "An error occurred while fetching data". The search log shows no errors and reports a successful completion. When the event count is lower, everything works fine; the problem appears when we search over longer time ranges (like the last 6 months). Approximate event counts: last 24 hours: 8K; last 7 days: 60K; last 30 days: 300K. The error occurs only on the SH cluster, not on a standalone instance (please also suggest the reason for this difference). A similar question has been asked on this forum before.
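One frequent cause of this symptom is a non-transforming base search: post-process panels then depend on raw events, which are truncated at large event counts. Post-process works most reliably when the base search is transforming (stats/timechart) so only aggregated rows are passed to the panels. A Simple XML sketch of the pattern, with the index and field names as placeholders:

```xml
<search id="base">
  <query>index=my_index | bin _time span=1d | stats count by _time, status</query>
</search>
<!-- each panel then post-processes the aggregated rows -->
<search base="base">
  <query>| timechart span=1d sum(count) by status</query>
</search>
```

Aggregating in the base search keeps the result set small regardless of the time range, which is exactly where the 6-month searches were failing.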
AZImaging/Projects/IMG2012002/WSI/D419BC00001/E7004004/SM/96b819b9-fc86-b81b-a999-55a72df0e05a.svs Hi, above is a string from which I want to extract 2 fields: IMG2012002 and D419BC00001. The first value comes after 2 slashes and the second value after 4 slashes. How can I write a regular expression for that? Please help.
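One way to capture the 3rd and 5th path segments (i.e. the values after 2 and after 4 slashes) is to match non-slash runs between slashes. A sketch, tested here in Python; the field names "project" and "slide" are made up for illustration:

```python
import re

# [^/]+ matches one path segment; the named groups grab segments 3 and 5.
pattern = re.compile(r"^[^/]+/[^/]+/(?P<project>[^/]+)/[^/]+/(?P<slide>[^/]+)/")

s = "AZImaging/Projects/IMG2012002/WSI/D419BC00001/E7004004/SM/96b819b9-fc86-b81b-a999-55a72df0e05a.svs"
m = pattern.search(s)
print(m.group("project"), m.group("slide"))  # IMG2012002 D419BC00001
```

In Splunk the same expression should work with `rex`, using its named-group syntax: `| rex "^[^/]+/[^/]+/(?<project>[^/]+)/[^/]+/(?<slide>[^/]+)/"`.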
Hi, I'm getting the following error on my search heads: "ERROR initializing ssl context : check splunkd.log regarding configuration errors for server <indexer ip>". Any help would be very much appreciated. @isoutamo @aakwah
Hi, I am new to Splunk and need some help. I have two sites: a primary site with a deployment server and a heavy forwarder, and a DR site with a deployment server and a cluster master. I'm trying to write a script that checks whether the primary site is up. My line of thought is something like this: Primary: if it is up, it holds the tag "primary". DR: as long as there is a tag called "primary", I am "standby"; if not, then I now hold the tag "primary" and enable the service on the cluster master server.
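A simple building block for such a script is a TCP reachability probe against the primary site (Splunk's management port 8089 is a typical target). This is only a sketch of the idea; the hostname is hypothetical, and a real failover script would also need to guard against split-brain (e.g. require several consecutive failed probes before promoting DR):

```python
import socket

def is_up(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds,
    i.e. a crude 'is the primary site alive?' check."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical hostname for illustration.
if is_up("primary-ds.example.com", 8089):
    print("primary is up - stay standby")
else:
    print("primary is down - promote DR")
```

The DR side would run this on a schedule (e.g. cron) and enable its services only when the primary stops answering.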
Hello team, I have a list of search names saved in CSV format that resides in Splunk as a lookup file (222 saved search names). I want to see the number of times each saved search triggered an alert per day, over one week. The search query I am using is: index=_internal sourcetype=scheduler alert_actions="*email*" status=success savedsearch_name=* | timechart span=1d count by savedsearch_name. Instead of * for the field savedsearch_name in the above query, I want to use the saved search names from the lookup table (CSV file) and get the result for each saved search present there. Could you please let me know how I can do that?
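One way is to feed the lookup in as a subsearch, which expands into `savedsearch_name="A" OR savedsearch_name="B" OR ...`. A sketch, assuming the lookup file is called `my_saved_searches.csv` and its column is literally named `savedsearch_name` (if the CSV header differs, add a `| rename` inside the subsearch):

```
index=_internal sourcetype=scheduler alert_actions="*email*" status=success
    [| inputlookup my_saved_searches.csv | fields savedsearch_name]
| timechart span=1d count by savedsearch_name
```

With 222 names the expanded filter is well within typical subsearch result limits.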
Hello dear Splunkers, recently we got an issue reported where our indexers report multiple problems with timestamp extraction. I found out that one log file (with a dedicated sourcetype) has multiple different time formats, so we are not able to determine the correct configuration for the sourcetype in props.conf. The beginnings of those events look like this: 1. 2021-03-22T16:15:31.995+0800 2. [DEBUG] 2021-03-22 16:15:32.075 3. <2021-03-22T02:15:29.217 CST> The input is a *.stdout log file, so it contains many different types of events. The biggest problem seems to be the letter "T": I would like to make it optional in the time format definition, but sadly I did not find anything that can help us. Has anybody dealt with a situation like this? Many thanks, Denis
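Since a single TIME_FORMAT strptime string cannot match both the "T" and the space separator, one approach (a sketch, not a tested config; the sourcetype name is a placeholder) is to use TIME_PREFIX to skip the variable prefixes and leave TIME_FORMAT unset, so Splunk's automatic timestamp recognition handles the separator variations:

```
[my_stdout_sourcetype]
# skip an optional "[DEBUG] " or "<" before the date
TIME_PREFIX = ^(?:\[\w+\]\s+|<)?
MAX_TIMESTAMP_LOOKAHEAD = 30
```

The lookahead cap keeps the automatic recognizer from latching onto date-like strings further into the event body.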
On my Linux server the universal forwarder and Splunk_TA_nix are installed; at least df and cpu are enabled in inputs.conf:

vi /opt/splunkforwarder/etc/apps/Splunk_TA_nix/local/inputs.conf

[script://./bin/df.sh]
interval = 300
sourcetype = df
source = df
index = os
disabled = 0

[script://./bin/cpu.sh]
sourcetype = cpu
source = cpu
#interval = 30
interval = 300
index = os
disabled = 0

When I search for this Linux server in Splunk, I get df logs, but cpu logs are missing:

Top 10 Values    Count    %
df               44       1.224%

Could anyone advise? Much appreciated.
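A common first check is whether the forwarder itself is reporting errors from the scripted input (cpu.sh depends on tools like `sar` from the sysstat package, which may be missing on the host). A sketch of the internal-log search, with the hostname as a placeholder:

```
index=_internal sourcetype=splunkd component=ExecProcessor host=my_linux_host cpu.sh
```

If cpu.sh fails when run, ExecProcessor usually logs its stderr there, which typically pinpoints the missing dependency or permission.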
I am planning to upgrade Splunk 7.2.6 to 8.1, and before that I am planning a Python upgrade from 2.7 to 3. When I ran the Splunk Platform Upgrade Readiness App, one private app in our environment called "Forward to Indexer" was flagged as a blocker, showing: Advanced XML - remove all usage of Advanced XML. File paths: \Local/data/ui/view\sample_search.xml and \local/data/ui/view\sample_notimeline.xml. I don't understand how to deal with this XML; I can't even tell where this app is used or what the impact would be if the XML were removed.
Hi team, I recently installed the PingAccess App for Splunk and the PingFederate App for Splunk on our search head, but I cannot see any information in the dashboards listed. When I navigate to the dashboards I see the message "Search is waiting for input...", and all the other dashboards show the same message. Is there anything like inputs that need to be configured on our end? If yes, where should we configure them? There is no documentation for the configuration, and I couldn't find anything under Data inputs either. Kindly help with my query.
Hi team, I encountered a few errors while upgrading my deployment server from version 7.2 to 8.1.2. These are the steps I followed:
1. I downloaded the latest Splunk Enterprise package from the Splunk portal and placed the .tgz file in the /tmp directory.
2. I stopped the splunkd service on the deployment master server, then ran the gunzip command so that the .tgz file in /tmp was converted to splunk-8.1.2-545206cc9f70-Linux-x86_64.tar.
3. I used the following command to untar the files into /opt/, and the files were extracted to the desired location: tar -xvf splunk-8.1.2-545206cc9f70-Linux-x86_64.tar -C /opt/
4. Once the files were extracted, I navigated to the bin directory and tried to accept the license and start the service, but I couldn't. I tried multiple times, after which I restored the backup I had and went back to the old configuration.
I stopped the services before the upgrade; is that correct or wrong? Also, kindly let me know how to overcome the other issues. I have attached the error snapshots for reference. Please guide me on how to fix this and successfully upgrade my DM server to the latest version.
I have configured an indexer cluster (heavy forwarder, 3 indexers, master node, and search head). My problem is that when I restart my indexers, the image below appears and I can't access the Splunk web page. Can you please help me?
Hi all, I have received the alert below from Splunk. Can anyone help identify the reason why I might be receiving this alert? "Splunk Alert: 00011-Authentication fail for BGP"