All Topics

Hi all, we are currently migrating from Splunk on-premises to Splunk Cloud. One of the apps we use heavily is haversine, which calculates the distance between two locations. I haven't been able to find a cloud-ready replacement for it. Can anyone recommend an app or another way to do this in the cloud? Cheers, Ryan
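If no vetted app is available, the haversine formula itself can be computed with Splunk's built-in eval trig functions (sin, cos, atan2, sqrt, pi), so an app is not strictly required. A minimal sketch, assuming fields lat1/lon1/lat2/lon2 in decimal degrees (these field names are illustrative, not from the original post):

```
| eval rlat1=lat1*pi()/180, rlat2=lat2*pi()/180
| eval dlat=(lat2-lat1)*pi()/180, dlon=(lon2-lon1)*pi()/180
| eval a=sin(dlat/2)*sin(dlat/2) + cos(rlat1)*cos(rlat2)*sin(dlon/2)*sin(dlon/2)
| eval distance_km=2*6371*atan2(sqrt(a), sqrt(1-a))
```

6371 km is the mean Earth radius; substitute 3959 for miles.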
I am having an issue with one of my monitor stanzas in inputs.conf. The stanza is as follows:

[monitor://E:Speech\Tomcat2232\logs\abc-call-router.log]
index = x
sourcetype = y
blacklist = .(gz|tar|tgz|zip|bkz|arch|etc|tmp|swp|nfs|swn)$
disabled = 0

I am expecting this monitor to ingest only E:\Speech\Tomcat2232\logs\abc-call-router.log, but it is also ingesting E:\Speech\Tomcat2232\logs\abc-call-router.log.1 and E:\Speech\Tomcat2232\logs\abc-call-router.log.2, which I don't want. Does anyone know why this is happening? I have been scratching my head. Any help appreciated. Thanks.
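Two things worth checking in the stanza as posted: the monitor path appears to be missing the backslash after E:, and the blacklist starts with an unescaped `.` (any character) rather than `\.` (a literal dot). One commonly suggested approach (a sketch, not verified against this environment) is to monitor the directory and whitelist the exact file name, anchored so the rotated .log.1/.log.2 copies cannot match:

```
[monitor://E:\Speech\Tomcat2232\logs]
whitelist = abc-call-router\.log$
index = x
sourcetype = y
disabled = 0
```

Alternatively, keep the existing stanza and add the rotation suffix to the blacklist, e.g. blacklist = \.(gz|tar|tgz|zip|bkz|arch|etc|tmp|swp|nfs|swn)$|\.log\.\d+$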
Hey Splunkers! We have a large JSON event that has a Body Message and a BodyJson Message; a little redundant, but this is what was provided. The immediate issue is that BodyJson.Message doesn't auto-extract the JSON fields, apparently because of the double quote before/after the curly brace in the Message object, and the backslashes escaping the double quotes in the KV pairs. If I remove them from the upload, the data extracts completely, but I haven't found a good SEDCMD to modify just this section of the event without breaking the rest of the event. Please help!

The section that needs a SEDCMD:

"Message": "{\"version\":\"0\",\"id\":\"5d3f\"...]}}"

_raw (obfuscated):

{"MessageId": "eff1", "ReceiptHandle": "gw6", "MD5OfBody": "41a8a", "Body": "{\n \"Type\" : \"Notification\",\n \"MessageId\" : \"dafe\",\n \"TopicArn\" : \"arn:aws:sns:us-east\",\n \"Message\" : \"{\\\"version\\\":\\\"0\\\"}\",\n \"Timestamp\" : \"2021-01-26T04:30:22.756Z\",\n \"SignatureVersion\" : \"1\",\n \"Signature\" : \"Eqaf90pc\",\n \"SigningCertURL\" : \"https://sns.us-east-1.amazonaws.com\",\n \"UnsubscribeURL\" : \"https://sns.us-east-1.amazonaws.com\"\n}", "Attributes": {"SenderId": "AID", "ApproximateFirstReceiveTimestamp": "1611635422813", "ApproximateReceiveCount": "1", "SentTimestamp": "1611635422812"}, "BodyJson": {"Type": "Notification", "MessageId": "dafe", "TopicArn": "arn:aws:sns", "Message": "{\"version\":\"0\",\"id\":\"5d3f\",\"detail-type\":\"Findings\",\"source\":\"aws\",\"account\":\"54\",\"time\":\"2021-01-26T04:30:22Z\",\"region\":\"us-east-1\",\"resources\":[\"arn:aws\"]}}", "Timestamp": "2021-01-26T04:30:22.756Z", "SignatureVersion": "1", "Signature": "Eqaf90pcXJtL425k7", "SigningCertURL": "https://sns.us-east-1.amazonaws.com", "UnsubscribeURL": "https://sns.us-east-1.amazonaws.com/?Action=Unsubscribe&SubscriptionArn=arn:aws:sns:us-east"}}

SEDCMDs tried so far:

SEDCMD-test1 = s/ "Message": "{/ "Message": {/g
SEDCMD-test2 = s/}", "Timestamp/}, "Timestamp/g
SEDCMD-test3 = s/\\//g
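As an alternative to rewriting _raw with SEDCMD, the escaped JSON string can often be parsed at search time with spath, which avoids any risk of breaking the rest of the event. A sketch, assuming the auto-extracted field is named BodyJson.Message:

```
| rename "BodyJson.Message" AS msg_json
| spath input=msg_json
```

spath treats the content of the named field as JSON, so the \" escapes are resolved when the field value is read and the inner keys (version, id, detail-type, ...) come out as ordinary fields.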
We are seeing this vulnerability show up via Qualys vulnerability scanning on both our dev and production Splunk instances. I am using the same SSL config for both and have tried solving this multiple ways, including the first solution proposed here: https://community.splunk.com/t5/Getting-Data-In/I-am-looking-for-clarification-on-SSL-compression-settings-in/m-p/126153

This is what our SSL and HTTP server config in server.conf currently looks like:

[sslConfig]
sslPassword = $encryptedsslpass$
serverCert = $servercertpath$
caCertFile = $cacertpath$
sendStrictTransportSecurityHeader = true
useSSLCompression = false
allowSSLCompression = false
sslVersions = tls1.2
sslVersionsForClient = tls1.2
cipherSuite = ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:AES256-GCM-SHA384:AES128-GCM-SHA256:AES128-SHA256

[httpServer]
replyHeader.X-XSS-Protection = 1; mode=block
replyHeader.Content-Security-Policy = script-src 'self'; object-src 'self'

Is there anything I need to add to this config, or elsewhere, to solve this vulnerability? I do not want to block the scanner from seeing the port, as I have seen proposed in some solutions.
I am new to Splunk and trying to determine how to set up an alert for when an Active Directory user is in two specific AD groups. For example, alert if a user is in both group A and group B. Does anyone have some direction on how to achieve this?
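One way to approach this (a sketch only; the index, sourcetype, and field names are assumptions that depend on how the AD data is ingested, e.g. via the Windows TA or ldapsearch) is to count the distinct target groups per user and alert when both appear:

```
index=wineventlog group IN ("Group A", "Group B")
| stats dc(group) AS group_count, values(group) AS groups BY user
| where group_count >= 2
```

Saved as an alert, a trigger condition of "number of results > 0" would then fire whenever any user is a member of both groups.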
I'm looking to insert some text at our heavy forwarder into certain sourcetypes, so that a third party running syslog-ng can better identify what the logs are; for example, "IISLog" or "DHCPLog". Does anyone have any experience doing this? Here's a sample log:

2021-01-26 20:12:28 192.168.58.11 POST /PerspectiveServices/SecureService.asmx - 443 - 192.168.49.24 Mozilla/4.0+(compatible;+MSIE+6.0;+SV1;+MS+Web+Services+Client+Protocol+4.0.30319.42000) - 200 0 0 26

This is an IIS log. I'd like to add the text somewhere (anywhere) into these logs. If it works, I'll do the same with some other sourcetypes. So the end result might be:

2021-01-26 20:12:28 192.168.58.11 IISLog POST /PerspectiveServices/SecureService.asmx - 443 - 192.168.49.24 Mozilla/4.0+(compatible;+MSIE+6.0;+SV1;+MS+Web+Services+Client+Protocol+4.0.30319.42000) - 200 0 0 26

We are sending them the logs from a heavy forwarder using syslog output. I was looking into potentially using SEDCMD within a props.conf file, but I'm not very experienced with it and am just getting started with regex.
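A SEDCMD on the heavy forwarder can inject a literal tag into _raw at index time. A rough sketch (the sourcetype name is a placeholder; the pattern inserts "IISLog" after the first three space-separated fields, matching the date, time, and server IP in the sample above):

```
# props.conf on the heavy forwarder
[your:iis:sourcetype]
SEDCMD-tag_iislog = s/^(\S+ \S+ \S+) /\1 IISLog /
```

Note that SEDCMD modifies the event as stored, so the inserted text will also appear in Splunk searches, not only in the syslog output to the third party.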
I just ran into an issue where the O365 app created for log collection had its Secret Key expire. According to http://docs.splunk.com/Documentation/AddOns/released/MSO365/Troubleshooting there should be a 401 or 500 error generated. But in fact, what is generated is an unhandled exception (running Add-on for O365 2.0.0, but there is no indication it would be fixed in the current 2.0.3 version):

2021-01-15 16:26:09,016 level=ERROR pid=12670 tid=MainThread logger=splunk_ta_o365.modinputs.management_activity pos=utils.py:wrapper:67 | start_time=1610745965 datainput="Audit_AzureActiveDirectory" | message="Data input was interrupted by an unhandled exception."
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunksdc/utils.py", line 65, in wrapper
    return func(*args, **kwargs)
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/modinputs/management_activity.py", line 102, in run
    executor.run(adapter)
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunksdc/batch.py", line 47, in run
    for jobs in delegate.discover():
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/modinputs/management_activity.py", line 125, in discover
    self.token.auth(session)
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/common/token.py", line 56, in auth
    self._token = self._policy(self._resource, session)
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/common/token.py", line 37, in __call__
    return self.portal.get_token_by_psk(self._client_id, self._client_secret, resource, session)
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/common/portal.py", line 98, in get_token_by_psk
    raise O365PortalError(response)
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/common/portal.py", line 31, in __init__
    self._code = data['error']['code']
TypeError: string indices must be integers

I already added the comment to the documentation and they suggested I post it here as well.
For now, I have an ugly alert/query in case this happens again:

index=_internal sourcetype="splunk:ta:o365:log" "Data input was interrupted by an unhandled exception." "File \"/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/common/portal.py\", line 31, in __init__" "TypeError: string indices must be integers"

I'm also sending a suggestion to MS to add events in the logs for "secret is expiring in X days".
I am trying to get the average of a time difference by using

| stats avg(time_dur) by type

and since I compute time_dur with

| eval time_dur=tostring(strptime(LastSeen,"%d/%m/%y %H:%M:%S")-strptime(FirstSeen,"%d/%m/%y %H:%M:%S"),"duration")

I think the data is coming in as a string, and because of that I am not getting any results. If I instead use

| eval time_dur=(strptime(LastSeen,"%d/%m/%y %H:%M:%S")-strptime(FirstSeen,"%d/%m/%y %H:%M:%S"))

and then do the stats, I get results; however, the value comes out as a plain number. I would like to get the difference in HH:MM.
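tostring(..., "duration") does return a string, and avg() silently ignores non-numeric values, which is why the first version yields nothing. One approach is to keep time_dur numeric (seconds), average it, and only format the result afterwards. A sketch:

```
| eval time_dur=strptime(LastSeen,"%d/%m/%y %H:%M:%S") - strptime(FirstSeen,"%d/%m/%y %H:%M:%S")
| stats avg(time_dur) AS avg_dur BY type
| eval avg_dur=tostring(round(avg_dur), "duration")
```

tostring with "duration" renders as HH:MM:SS (with a day prefix for long spans); for strictly HH:MM one could build the string manually, e.g. | eval avg_hhmm=printf("%02d:%02d", floor(avg_dur/3600), floor((avg_dur%3600)/60)).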
I would like to do a search using 2 columns in a lookup table where each row is AND'd. Something like:

Col1  Col2
A     1
B     2
C     3
D     4

where the search would be equivalent to:

index=myindex (Col1=A AND Col2=1) OR (Col1=B AND Col2=2) OR (Col1=C AND Col2=3) OR (Col1=D AND Col2=4)
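A subsearch over the lookup produces exactly this shape: when a subsearch returns multiple fields per row, the fields within a row are ANDed and the rows are ORed. A sketch, assuming the lookup file is named mylookup.csv and its column names match the event field names:

```
index=myindex [| inputlookup mylookup.csv | fields Col1 Col2 ]
```

If the lookup columns do not match the event field names, a rename inside the subsearch (e.g. | rename Col1 AS src) maps them before expansion.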
I am trying to write a query that will ignore events in certain indexes (these indexes change over time). I have a CSV file with a single column that looks like this:

Index
a
b
c

NOTE: this is a simple example; it really has 25+ indexes.

host=* NOT index=[| inputlookup Index.csv | fields Index]

This is my non-working attempt. The actual query is irrelevant (host=*); the point is that I want to ignore any hits where the index is in the CSV file (index!=a index!=b index!=c). Any help would be greatly appreciated.
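The subsearch result needs to be expressed in terms of the actual index field, so renaming the lookup column to index lets NOT negate the whole expanded OR list. A sketch:

```
host=* NOT [| inputlookup Index.csv | fields Index | rename Index AS index ]
```

The subsearch expands to (index=a OR index=b OR index=c ...), and the NOT in the outer search excludes all of them.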
Good afternoon. We are going to use Datadog to monitor Atlas databases. Can Splunk ingest Datadog data and allow us to continue using Splunk for alerting, as that is our enterprise solution? Apologies if this question is not framed correctly. Thanks.
Hello - I'm overall a novice to Splunk, as my focus is more on ServiceNow administration. But I'm trying to get a better high-level understanding of how Splunk works with our SN environment and Event Management, to better help support when Splunk/Event Management issues crop up. I haven't had a chance to discuss this further with our local support, who integrated and set this up last year with an outside vendor's support, so I thought I'd ask here.

We have Splunk set up (using the SN Splunk add-on) to create events in ServiceNow. We have a local Splunk account with the proper Splunk role and access to the REST API, and all seems to work in most cases. I'm just trying to understand what the transaction logs are telling me. Splunk seems to create a large number of transactions during the day. Many of them appear to be just looking at or scanning the em_event table (note the URL without parameters), while some others also include parameters in the URL query string (/api/now/table/em_event?sysparm_exclude_reference_link=true&sysparm_query=sys_created_on......).

What would be causing the Splunk REST API transactions where no parameters are passed? Is this normal? From what I understand, the transactions with parameters would be coming from Splunk where our Splunk admin set up such a query. Just trying to get a clearer picture of this part of the integration. Thanks.

(screenshot: SN Transaction Log)
Hi all, I ran into an error while setting up the Microsoft Azure Add-on for Splunk. Everything seems to be correct from a configuration standpoint, but I am getting the error below in the internal logs:

_Splunk_ error collecting event hub data: Token authentication failed: 'utf-8' codec can't decode byte 0xe4 in position 0: invalid continuation byte

Any words are much appreciated. Thanks, Bhaskar
I publish metric time in GMT, and the data displays in GMT. Is there a way to convert the GMT time to the user's local time when looking at results in a dashboard?
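If the metric time lands in _time as a proper epoch value, Splunk's time-formatting functions render it in each search user's configured time zone (Account settings > Time zone), so a display-time eval is often all that is needed. A sketch:

```
| eval display_time=strftime(_time, "%Y-%m-%d %H:%M:%S %Z")
```

If instead the GMT time is sitting in an ordinary string field, it would first need strptime() with an explicit offset before formatting.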
Hi, I need help adding a line in my props.conf file that will convert the lastupdatedt time from UTC to Mountain Time. Example extracted from an event below:

"lastupdatedt":"2021-01-26 11:32:46"

Thanks
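props.conf does not convert timestamps to a fixed zone; instead, the TZ setting declares the zone the raw timestamp was written in, and Splunk then renders _time in each viewer's configured time zone (e.g. US/Mountain). A sketch, assuming this field should drive the event timestamp (the sourcetype name is a placeholder):

```
[your:sourcetype]
TIME_PREFIX = "lastupdatedt":"
TIME_FORMAT = %Y-%m-%d %H:%M:%S
TZ = UTC
MAX_TIMESTAMP_LOOKAHEAD = 20
```

These settings apply at index time, so they affect newly indexed events only, not data already on disk.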
What kind of syslog server tool is best on Linux for capturing CyberArk logs into Splunk? We are planning to set up syslog on the heavy forwarder and directly monitor the inputs from the syslog location on the heavy forwarder.
What kind of syslog server tool is best on Linux to capture the CyberArk logs into Splunk. We are planning to setup syslog on the heavy forwarder and directly monitor the inputs from the syslog location on the heavy forwarder.   Is it a best practice/doable to combine both on a same servers we don't have any dedicated syslog server in our environment? 
Hi all, at the moment I am trying to color a chart depending on the recency of an alert. This works great for coloring certain time periods during which an alert was triggered; however, I am trying to color the entire chart for a brief moment of 5 minutes, so that the chart stands out and grabs attention. Is there any way to easily color the entire chart, or the background of the chart, for a brief moment? At the moment I have a query that copies the count field into a second field and assigns different colors in the XML options in the source, like so:

#query:
| makeresults count=20
| eval alert=(random()%2)
| streamstats count
| eval _time=_time-(count*60)
| eval recent_time=relative_time(now(),"-5M@M")
| eval latest_alert_time=if(alert>0,_time,None)
| eval chart_color = case(latest_alert_time>recent_time,count)
| fields _time count alert chart_color

#XML:
<option name="charting.fieldColors">{"count":#228B22, "chart_color":#bf1f1f}</option>

This solution only results in colored sections during the alert time, not a completely colored chart.
(screenshot: current vs desired result; the example image was made by simply setting chart_color=count, so it is not dynamically responding to recent alerts)

A change in background color would also be fine; any suggestions are welcome.

Roelof

--------------------------------------------------
#full XML of example dashboard:
<dashboard>
  <label>splunk_forum_background_color</label>
  <row>
    <panel>
      <title>current result</title>
      <chart>
        <search>
          <query>| makeresults count=20 | eval alert=(random()%2) | streamstats count | eval _time=_time-(count*60) | eval recent_time=relative_time(now(),"-5M@M") | eval latest_alert_time=if(alert&gt;0,_time,None) | eval chart_color = case(latest_alert_time&gt;recent_time,count) | fields _time count chart_color</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="charting.chart">area</option>
        <option name="charting.fieldColors">{"count":#228B22, "chart_color":#bf1f1f}</option>
        <option name="charting.drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </chart>
    </panel>
    <panel>
      <title>desired result</title>
      <chart>
        <search>
          <query>| makeresults count=20 | eval alert=(random()%2) | streamstats count | eval _time=_time-(count*60) | eval recent_time=relative_time(now(),"-5M@M") | eval latest_alert_time=if(alert&gt;0,_time,None) | eval chart_color = count | fields _time count chart_color</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="charting.chart">area</option>
        <option name="charting.fieldColors">{"count":#228B22, "chart_color":#bf1f1f}</option>
        <option name="charting.drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </chart>
    </panel>
  </row>
</dashboard>
Hello, I need a regex to extract the GUID from non-standard UPN results that show up in this format: ex095838d@mydomain.onmicrosoft.com

First 2 characters: will always be "ex".
GUID: a number string of varying length.
Last character before the @: will always be "d".

Other UPNs from the results in the standard format username@mydomain.onmicrosoft.com (which don't follow the above format) should remain untouched. A standard username is an all-letter string of varying length. Thanks!
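A rex that anchors on the fixed "ex" prefix and the trailing "d" before the @ will match only the non-standard UPNs and leave the all-letter usernames untouched, since those contain no digits. A sketch (the source field name upn is an assumption):

```
| rex field=upn "^ex(?<guid>\d+)d@"
```

Events whose UPN does not match simply get no guid field, so standard-format results pass through unchanged.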
Hi, I am trying to color-code the App_State cells based on their state in the table below.

App_Name    App_State
abc         Running
cde         Stopped
fgh         Running
xyz         Running
mnp         Stopped

In the dashboard source:

<dashboard theme="dark" refresh="30">
  <label>ABC</label>
  <row>
    <panel>
      <table>
        <search>
          <query>index=main host="abcde" | rex field=_raw "(?ms)Label\s+Name\s:\s(?&lt;App_Name&gt;\w+\S+)" | rex field=_raw "(?ms)Sync\sState\s:\s(?&lt;App_State&gt;[\w+\s]+)\sNumber" | table App_Name,App_State</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">100</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">true</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
        <format type="color" field="App_State">
          <colorPalette type="map">{"Running":#53a051,"Stopped":#dc4e41}</colorPalette>
        </format>
      </table>
    </panel>
  </row>
</dashboard>

But I am not getting any colors in the cells. Can someone please take a look and help me get the cells colored as desired?
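One likely culprit (a guess, since the actual rex output isn't shown): a map-type colorPalette only colors cells whose value matches a key exactly, and the capture pattern [\w+\s]+ can pick up trailing spaces or newlines, which would make "Stopped " fail to match "Stopped". Trimming the field before tabling is worth trying:

```
| eval App_State=trim(App_State)
| table App_Name, App_State
```

If the values still don't match, rendering them with quotes (| eval check="[".App_State."]") makes any hidden whitespace visible.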
Hi, how can we extract the time from the log event and use it at index time? Splunk shows a different timestamp on the indexer than the timestamp that appears in the log event itself. Please help.
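Splunk assigns _time at index time based on the timestamp settings in props.conf; when TIME_PREFIX/TIME_FORMAT are missing or wrong, it falls back to heuristics (or the file modification time), which is a frequent cause of this mismatch. A generic sketch, where every value is a placeholder to adapt to the actual events:

```
[your:sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
TZ = UTC
MAX_TIMESTAMP_LOOKAHEAD = 25
```

TIME_PREFIX is a regex locating the start of the timestamp in _raw, TIME_FORMAT is its strptime layout, and TZ declares the zone the log writes in; changes take effect for newly indexed data only.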