All Topics

Can we stop using a licensed Heavy Forwarder and reuse the same license when setting up another Heavy Forwarder?
Hi All, we are getting the below error while adding the app account configuration in the Microsoft Cloud Services Add-on from the web UI:

REST Error [400]: Bad Request -- Account authentication failed. Please check your credentials and try again

So, as per the documentation, we did the same configuration from the command line, and it now seems to be saved without any error. But after configuring inputs for the event hub, it gives the below error message in sourcetype=mscs:azure:eventhub:log:

2021-02-16 13:36:51,248 level=ERROR pid=50816 tid=MainThread logger=__main__ pos=utils.py:wrapper:72 | datainput="Test" start_time=1613479010 | message="Data input was interrupted by an unhandled exception."
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/bin/splunk_ta_mscs/splunksdc/utils.py", line 70, in wrapper
    return func(*args, **kwargs)
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/bin/mscs_azure_event_hub.py", line 597, in run
    credential = self._create_credentials(config, proxy)
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/bin/mscs_azure_event_hub.py", line 505, in _create_credentials
    args = parser.parse(content)
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/bin/splunk_ta_mscs/splunksdc/config.py", line 125, in parse
    stanza[field.key] = field.parse(content)
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/bin/splunk_ta_mscs/splunksdc/config.py", line 156, in parse
    value = super(IntegerField, self).parse(document)
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/bin/splunk_ta_mscs/splunksdc/config.py", line 139, in parse
    raise KeyError("%s not exists" % self._key)
KeyError: 'account_class_type not exists'

First, I doubt the add-on is even making the authentication request to Azure, because other credentials that work fine with another add-on (O365) give the same error here.
Second, I cannot tell whether the app account configurations that saved without any error are actually working (I can't see an error in the splunkd logs; the error only appeared when configuring from the web UI).
Third, what is the stanza for 'account_class_type' to put in the conf file, as it's not mentioned in the documentation?

Thanks, Bhaskar
Splunk SPL query to list the unique serverclasses and apps present on a deployment server.
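One possible starting point is the deployment server's REST endpoint, run locally on the deployment server itself. This is only a sketch: the endpoint exists, but the exact field names (e.g. serverclasses) vary by version and should be verified against your instance.

```spl
| rest /services/deployment/server/applications splunk_server=local
| rename title AS app
| stats values(app) AS apps BY serverclasses
```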
We would like to set case_sensitive_match = 0 as the default setting, so that new lookups are created case-insensitive. Will it work if I set it in transforms.conf under [default]?
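For reference, this is what the proposed configuration would look like; whether the [default] stanza is honored for this particular setting is exactly the open question, so treat this as a sketch to test, not a confirmed answer:

```
# transforms.conf
[default]
case_sensitive_match = 0
```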
Hi Splunkers, please help. I have a search where I want to show, per host, the percentage of errors (mentioned below) that occurred on that host compared with the other hosts. This is my search, and I get 100% for all hosts:

index=pkg_dummy host IN (*) "[Error] POS Card Validation - Result: Timeout"
| eval host=host
| dedup _raw
| rex "\[Error\]\sPOS\sCard\sValidation\s\-\sResult:\s(?<timeout>Timeout)"
| stats count by host AS "TOTAL"
| stats count(eval(timeout)) AS NOK_Transaction by host
| eval FailedTr = round((NOK_Transaction / TOTAL *100),2), FailedTr = FailedTr + "%"
| table host FailedTr
| sort FailedTr desc

Thank you
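One hedged way to rework the search above: the consecutive stats calls discard each other's results (and "by host AS TOTAL" is not valid syntax), so an eventstats can carry the overall total alongside the per-host counts. The index name and error string are taken from the question; verify the percentages against your data.

```spl
index=pkg_dummy "[Error] POS Card Validation - Result: Timeout"
| dedup _raw
| stats count AS NOK_Transaction BY host
| eventstats sum(NOK_Transaction) AS TOTAL
| eval FailedTr = round(NOK_Transaction / TOTAL * 100, 2) . "%"
| sort - NOK_Transaction
| table host FailedTr
```

Sorting on NOK_Transaction (a number) rather than FailedTr (now a string ending in "%") keeps the ordering numeric.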
Is any add-on available to integrate Splunk into C++ code? Can someone share their experience or sample code? Thanks a lot in advance...
Hi Everyone, I have some standard Windows logs that are not in English. When I get the data in, how can I translate it into English? Can Splunk do this translation? Should it be done in the parsing phase when the data comes in, or in the search phase when writing a correlation search? Thank you for your help. Cheers
Hello, I need some help. One of our clients wants to see when the patch version of Splunk is updated. Is this possible?
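One hedged starting point: the server/info REST endpoint reports each instance's running version, so a scheduled search could record it over time and surface changes. The endpoint is standard; the tracking workflow around it is only a sketch.

```spl
| rest /services/server/info
| table splunk_server version
```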
We need to send SNMP traps from Splunk to another system. As per my understanding, these are the steps required: 1. Create a custom alert action app. 2. Configure alerts from a search and add the custom alert action to them. Is there anything else required? Is there any sample script to send the SNMP traps? Thanks in advance!!
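For step 1 above, a minimal sketch of the alert_actions.conf side of a custom alert action app is shown here. The stanza name and script name (send_snmp_trap.py) are placeholders, not from any real app; the script itself (which would build and send the trap) still has to be written and placed in the app's bin directory.

```
# alert_actions.conf in the custom app
[send_snmp_trap]
is_custom = 1
label = Send SNMP trap
alert.execute.cmd = send_snmp_trap.py
```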
Hi, I have a dashboard with a multiselect filter for Application Name. The filter is populated from a lookup that contains more than 2000 different application names. The query to get the application names is very simple and takes less than a second:

| inputlookup applications.csv
| dedup "App Name"
| sort 0 "App Name"

However, once I click on the filter to choose a value, the dashboard freezes for a while before I can select anything. BTW, I need the option to view all applications by default, so it will not help to add another filter before it that divides the applications into smaller buckets.
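For reference, a sketch of the multiselect input in Simple XML, with a wildcard "All" default so every application is included without rendering a selection, and stats in place of dedup+sort (often cheaper on large lookups; verify in your environment). Token and label names are placeholders.

```xml
<input type="multiselect" token="app_tok">
  <label>Application Name</label>
  <fieldForLabel>App Name</fieldForLabel>
  <fieldForValue>App Name</fieldForValue>
  <choice value="*">All</choice>
  <default>*</default>
  <search>
    <query>| inputlookup applications.csv | stats count by "App Name" | fields "App Name"</query>
  </search>
</input>
```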
Hello all, For various reasons, I packaged my developed app as a tar archive and uploaded it in the app management section as a private app. It should then get vetted and either succeed or fail. Currently, the app(s) have been stuck in the vetting process for several days. Is there any way to cancel the vetting? Kind regards, Patrick
I'm trying to extract the timestamp exactly from the CSV for each event, but it doesn't happen; the search head results show only the indexed time. Am I doing anything wrong here?

props.conf:

[websense:cg:kv]
TIME_PREFIX ="(.*?1)","(.*?)"
TIME_FORMAT=[%d/%m/%y %H:%M:%S]
TRANSFORMS-eliminate_header = eliminate_header
INDEXED_EXTRACTIONS = CSV
FIELD_DELIMITER = ,
TIMESTAMP_FIELDS = Date,Time
HEADER_FIELD_LINE_NUMBER = 1

transforms.conf:

[eliminate_header]
REGEX = "Date"|"Time"|"Action"|"Category Name"|"Localized Country"|"Policy Name"
DEST_KEY = queue
FORMAT = nullQueue

Sample event:

"16/02/2021","07:19:41","Allowed","Collaboration - Office","None","##DEFAULT_Policy","abc@ff.com","eer-ltp-55dd8","live.com","None","None","pptsgs.officeapps.live.com:443/","None","None","34.98.220.117","United States","52.109.124.129","United States","10.212.168.62","None","None","None","None","None","Unknown","Unknown","594","17711","18305.0","Endpoint (Proxy Connect)","Static Classification","None","443","None","Connect"
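A hedged sketch of a corrected props.conf for the stanza above: the sample event carries a 4-digit year (so %Y rather than %y), TIME_FORMAT takes no surrounding brackets, and when INDEXED_EXTRACTIONS with TIMESTAMP_FIELDS locates the timestamp, TIME_PREFIX is not needed. Verify against your data before deploying.

```
[websense:cg:kv]
INDEXED_EXTRACTIONS = csv
FIELD_DELIMITER = ,
HEADER_FIELD_LINE_NUMBER = 1
TIMESTAMP_FIELDS = Date,Time
TIME_FORMAT = %d/%m/%Y %H:%M:%S
TRANSFORMS-eliminate_header = eliminate_header
```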
Hi there, just a quick question on the cluster map, which is not really displaying what we are aiming for... We have a simple query which is piped to iplocation and then geostats, like this:

query | iplocation myIP | geostats count by Country globallimit=0

We are trying to regroup items by Country, which may seem obvious, but it is more difficult than expected. Here is an example of the output (we guess geobin/lat/lon is messing with us): blue Canada is split into 2 locations, and the USA into at least 4; we would like a cluster more or less like the choropleth map. If we count by Country before geostats, it shows the expected result (in a table), but when we apply geostats it is split by location. Any way to circumvent this? We tried using another iplocation prefix with geostats count by myCountry, but that does not work either.
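One hedged approach to the splitting described above: collapse each country to a single point before geostats, so clustering happens per country rather than per city-level geobin. Averaging lat/lon is a rough centroid and can land oddly for wide countries; this is a sketch, not a confirmed fix.

```spl
query
| iplocation myIP
| stats count avg(lat) AS lat avg(lon) AS lon BY Country
| geostats latfield=lat longfield=lon globallimit=0 sum(count) BY Country
```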
Hi Members, I am quite new to Splunk and I need to send Splunk search results to an AWS S3 bucket. I have tried some apps from Splunkbase but they are not working (app no. 5273 and Event Push by Deductiv). Can someone guide me on what approach I should follow to build such a pipeline? (Since we are working on just a POC we can't use Splunk DSP; I am looking for an open-source or free approach with minimal cost.) Thanks
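For a minimal-cost POC, one hedged sketch is to export from the Splunk CLI and pipe straight into the AWS CLI (which accepts "-" for stdin). The search, bucket, and object key below are placeholders; this assumes the AWS CLI is installed and configured with credentials on the Splunk host.

```
$SPLUNK_HOME/bin/splunk search 'index=main | head 100' -output csv -maxout 0 \
  | aws s3 cp - s3://my-bucket/splunk-results.csv
```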
Hi Team, I would like to get certified as a Splunk admin. I am certified as a Splunk User and a Splunk Power User for version 6.x. Now I would like to get the Splunk Admin certification. Could you please let me know how to approach the admin certification, and also help me find training materials? Regards, GJ
Does anyone have the directions handy to disable Splunk Stream on a particular server? Is it done via the Splunk Stream app? I want to disable it in a way that the service will not start up when the server reboots.
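One hedged sketch using the Splunk CLI: disabling the Stream technology add-on app on that host stops its capture across restarts. The app name Splunk_TA_stream and the credentials are assumptions to verify against your deployment.

```
$SPLUNK_HOME/bin/splunk disable app Splunk_TA_stream -auth admin:changeme
$SPLUNK_HOME/bin/splunk restart
```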
Hi, I'd like to store the data collected by the "Splunk Add-on for Microsoft SQL Server" in a metrics index. Initially I installed the TA as described and it collected data into a normal (event) index with no problems. I then updated its inputs.conf file to point to the metrics index, which failed because the data wasn't structured correctly. After a little Googling I crafted props and transforms files to change the counter field to metric_name (mainly based on this answer: Sending-Perfmon-data-to-metrics-index), but it's still not working.

The inputs.conf file looks like this (I'm just showing one measure as an example):

[perfmon://sqlserverhost:processor]
object = Processor
counters = % Processor Time
instances = _Total
interval = 60
showZeroValue = 1
mode = single
disabled = 0
index = em_metrics
sourcetype = PerfmonMetrics:sqlserverhost:processor

props.conf:

[PerfmonMetrics:sqlserverhost:processor]
TRANSFORMS-metric = sqlserverhost_metric
TRANSFORMS-value = sqlserverhost_value

transforms.conf:

[sqlserverhost_metric]
REGEX = collection=(.+)[\s\S]*counter=(.+)[\s\S]*instance=(.+)
FORMAT = metric_name::$1.$3.$2
WRITE_META = true

[sqlserverhost_value]
REGEX = Value=(.+)
FORMAT = metric_value::$1
WRITE_META = true

All three of these files are on the Universal Forwarder on the MSSQL host I'd like to monitor. The architecture is: Universal Forwarder on the host, to a Heavy Forwarder, then via Cribl (which lets me see that the transforms are not working) to the Indexer/Search Head. What am I doing wrong here? Thanks, Eddie
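A hedged observation on placement rather than content: index-time TRANSFORMS are applied at the first full parsing tier, and a Universal Forwarder does not parse events, so in the architecture described the props/transforms would live on the Heavy Forwarder while inputs.conf stays on the UF. The stanza bodies below simply mirror the question and are not independently verified.

```
# props.conf on the Heavy Forwarder (first parsing tier), not on the UF
[PerfmonMetrics:sqlserverhost:processor]
TRANSFORMS-metric = sqlserverhost_metric
TRANSFORMS-value = sqlserverhost_value
```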
I am passing the time value and hostname from table 1 in panel 1 to another query/panel in the same dashboard. Here is a sample time format: 2021-02-14 19:04:34

I first had the time as column 2 and tried all combinations of eval:

<eval token="e">strptime($click.value2$,"%Y-%m-%d %H:%M:%S.%3N")</eval>
<eval token="l">strptime($row._time$,"%Y-%m-%d %H:%M:%S.%3N")+1200</eval>

When I printed $e$ and $l$, they said NaN. Then I made it column 1 and used this combination of eval:

<eval token="e">$click.value$-600</eval>
<eval token="l">relative_time($click.value$,"+5m")</eval>

When I printed them, $e$ said 1421 and $l$ said NaN. Ideas?
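One likely explanation for the NaN above: the sample value "2021-02-14 19:04:34" carries no milliseconds, so a strptime format ending in ".%3N" fails to match and returns null. A hedged sketch with the matching format, assuming the time cell is the clicked column ($click.value$), and relative_time fed an epoch rather than a string:

```xml
<eval token="e">strptime($click.value$, "%Y-%m-%d %H:%M:%S") - 600</eval>
<eval token="l">strptime($click.value$, "%Y-%m-%d %H:%M:%S") + 300</eval>
```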
Hi All, I have the below data:

Date                 | Orginaldate          | jobid | process_name | Messge_text
14-02-2020 T11:30:00 | 14-02-2020 T11:25:00 | a1234 | testprocess1 | process start
14-02-2020 T11:45:00 | 14-02-2020 T11:35:00 | a1236 | testprocess2 | process start
14-02-2020 T12:00:00 | 14-02-2020 T11:47:00 | a1234 | testprocess1 | process ends
14-02-2020 T12:15:00 | 14-02-2020 T11:50:00 | a1235 | testprocess3 | process start
14-02-2020 T12:30:00 | 14-02-2020 T12:17:00 | a1235 | testprocess3 | process ends
14-02-2020 T12:45:00 | 14-02-2020 T12:35:00 | a1236 | testprocess2 | process ends
14-02-2020 T13:00:00 | 14-02-2020 T12:50:00 | a1237 | testprocess4 | process start
14-02-2020 T13:15:00 | 14-02-2020 T13:05:00 | a1237 | testprocess4 | process ends

I want to group the events by jobid based on the Orginaldate column, not on _time or the first column, and I want the below output:

Orginaldate          | jobid | processname  | start_msg     | end_msg      | duration
14-02-2020 T11:25:00 | a1234 | testprocess1 | process start | process ends | 1320
14-02-2020 T11:35:00 | a1236 | testprocess2 | process start | process ends | 3600
14-02-2020 T11:50:00 | a1235 | testprocess3 | process start | process ends | 1620
14-02-2020 T12:50:00 | a1237 | testprocess4 | process start | process ends | 900

I have tried converting the timestamp into epoch and then grouping the events, but I am not able to separate out the start and end times.
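The grouping described above can be sketched in SPL like this (field names are taken from the question as-is, including the Orginaldate and Messge_text spellings; verify the strptime format against your actual data):

```spl
| eval otime = strptime(Orginaldate, "%d-%m-%Y T%H:%M:%S")
| stats min(otime) AS start_time max(otime) AS end_time
        values(process_name) AS processname
        earliest(Messge_text) AS start_msg latest(Messge_text) AS end_msg
        BY jobid
| eval duration = end_time - start_time
| eval Orginaldate = strftime(start_time, "%d-%m-%Y T%H:%M:%S")
| table Orginaldate jobid processname start_msg end_msg duration
```

Note that earliest/latest order by _time, not by otime; if index order differs from Orginaldate order, conditional aggregations over otime (e.g. min(eval(if(Messge_text="process start", otime, null())))) are the safer variant.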