All Topics


Hi, could anyone please share the dashboard SPL for the lateral movement detection shown in this YouTube video? https://youtu.be/bCCf9q2B4BM?si=P7FoduAwS--Hkgbw Thanks
Hi Team, I am a volunteer at a non-profit organization. Is there any Splunk pricing available for non-profit organizations? Thank you.
I need to extract the timestamp from a JSON log where the date and time are in two separate fields. Example below:

{ "Date": 240315, "EMVFallback": false, "FunctionCode": 80, "Time": 154915 }

Date here is the equivalent of 2024-March-15 and the time is 15:49:15. I am struggling to find a way to extract the timestamp using props.conf. Could you please assist?

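Since the date and time live in two different JSON keys with other keys in between, a single TIME_PREFIX/TIME_FORMAT pair cannot see both of them at once. One possible approach is an ingest-time eval that stitches the two fields together. This is only a sketch, assuming Splunk 8.1+ (where json_extract should be usable in INGEST_EVAL) and a hypothetical sourcetype name my_json; it belongs on the parsing tier (HF or indexers):

props.conf:
[my_json]
TRANSFORMS-set_time = set_time_from_date_and_time

transforms.conf:
[set_time_from_date_and_time]
# Concatenate "Date" (yymmdd) and "Time" (HHMMSS) and parse them as one timestamp.
# Field names and the %y%m%d%H%M%S layout are taken from the sample event above.
INGEST_EVAL = _time=strptime(json_extract(_raw,"Date").json_extract(_raw,"Time"), "%y%m%d%H%M%S")
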
I'm trying to create what is effectively a "server" dropdown in a dashboard, where I want to allow people to filter on one or more servers from a lookup. By default, I want the visualization to show for all servers. I have the lookup pulling values, but I'm stuck trying to figure out how to make it so that they don't have to unselect a default "*" value. Ideally, the input is empty by default (or it can show some value like "*" or "all"), but once they start selecting individual servers that "all" option is removed. Conversely, if they remove all servers from the filter, it should once again act like "*". Here's a stripped-down version of what I'm trying to do:

<form version="1.1" theme="dark">
  <label>My dashboard</label>
  <fieldset submitButton="false">
    <input type="time" token="field1">
      <label></label>
      <default>
        <earliest>-5m</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="multiselect" token="server" searchWhenChanged="true">
      <label>server</label>
      <search>
        <query>| inputlookup server_lookup.csv</query>
      </search>
      <fieldForLabel>server</fieldForLabel>
      <fieldForValue>server</fieldForValue>
      <delimiter>, </delimiter>
      <default>*</default>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>Some panel</title>
      <chart>
        <search>
          <query>index=* server_used IN ($server$) | stats median(some_value)</query>
          <earliest>$field1.earliest$</earliest>
          <latest>$field1.latest$</latest>
          <sampleRatio>1</sampleRatio>
          <refresh>1m</refresh>
          <refreshType>delay</refreshType>
        </search>
        <option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
        <option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
        <option name="charting.axisTitleX.visibility">visible</option>
        <option name="charting.axisTitleY.visibility">visible</option>
        <option name="charting.axisTitleY2.visibility">visible</option>
        <option name="charting.axisX.abbreviation">none</option>
        <option name="charting.axisX.scale">linear</option>
        <option name="charting.axisY.abbreviation">none</option>
        <option name="charting.axisY.scale">linear</option>
        <option name="charting.axisY2.abbreviation">none</option>
        <option name="charting.axisY2.enabled">0</option>
        <option name="charting.axisY2.scale">inherit</option>
        <option name="charting.chart">radialGauge</option>
        <option name="charting.chart.bubbleMaximumSize">50</option>
        <option name="charting.chart.bubbleMinimumSize">10</option>
        <option name="charting.chart.bubbleSizeBy">area</option>
        <option name="charting.chart.nullValueMode">gaps</option>
        <option name="charting.chart.rangeValues">[0,10,30,100]</option>
        <option name="charting.chart.showDataLabels">none</option>
        <option name="charting.chart.sliceCollapsingThreshold">0.01</option>
        <option name="charting.chart.stackMode">default</option>
        <option name="charting.chart.style">shiny</option>
        <option name="charting.gaugeColors">["0x118832","0xcba700","0xd41f1f"]</option>
        <option name="charting.layout.splitSeries">0</option>
        <option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
        <option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
        <option name="charting.legend.mode">standard</option>
        <option name="charting.legend.placement">right</option>
        <option name="charting.lineWidth">2</option>
        <option name="refresh.display">progressbar</option>
        <option name="trellis.enabled">0</option>
        <option name="trellis.scales.shared">1</option>
        <option name="trellis.size">medium</option>
      </chart>
    </panel>
  </row>
</form>

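One pattern that is often shared on Splunk Answers for this is a <change> handler on the multiselect that rewrites the form.server token: drop the "*" entry as soon as a real server is picked, and fall back to "*" when the selection becomes empty or when "*" is re-selected. This is only a sketch and not tested against this exact dashboard; the token name matches the XML above and the case() branches are illustrative:

<input type="multiselect" token="server" searchWhenChanged="true">
  <!-- label, search, fieldForLabel, fieldForValue, delimiter, default as in the dashboard above -->
  <change>
    <!-- empty selection -> "*"; "*" plus others -> keep only the others; "*" picked last -> reset to "*" -->
    <eval token="form.server">case(
      isnull('form.server') OR mvcount('form.server')=0, "*",
      mvcount('form.server')>1 AND mvindex('form.server',0)="*", mvindex('form.server',1,mvcount('form.server')-1),
      mvcount('form.server')>1 AND mvindex('form.server',mvcount('form.server')-1)="*", "*",
      1=1, 'form.server')</eval>
  </change>
</input>

You could also add a static <choice value="*">All</choice> entry so the default renders as a friendly "All" label instead of a bare asterisk.
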
I installed the Splunk Universal Forwarder on Linux and started it. It is running, but I am not able to access the web interface after entering the IP address and port number in a browser.
I wanted to use the MLTK KMeans clustering algorithm to separate indexes with slow usage growth from indexes with sudden usage growth, and also to predict index usage for the next year. Has anyone already done this and can share their experience?
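Not a full answer, but a rough sketch of one way to build clustering features from license usage data, assuming MLTK is installed and you can search _internal on the license manager. The feature choices (average daily volume and its volatility) and k=2 are purely illustrative:

index=_internal source=*license_usage.log type="Usage" earliest=-90d
| bin _time span=1d
| stats sum(b) as daily_bytes by _time, idx
| stats avg(daily_bytes) as avg_daily_bytes, stdev(daily_bytes) as stdev_daily_bytes by idx
| eval volatility=stdev_daily_bytes/avg_daily_bytes
| fit KMeans avg_daily_bytes volatility k=2 into index_growth_clusters

For the forecasting part, MLTK's StateSpaceForecast or the built-in predict command on a per-index timechart would be the usual starting points.
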
Hello to all, I have a multivalue field content.errormsg that contains real values but also a null value. When the null value is present in the field, no results are shown in the output. Example:

errormsg
closed connection
Empty String
null

Needed result:

errormsg
closed connection
Empty String

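If the unwanted entry is the literal string "null", one sketch using mvfilter (the field is renamed first because of the dot in content.errormsg); if it is an actual empty/missing value instead, swap the comparison for an isnotnull()/len() check. Untested, so treat it as a starting point:

... | rename content.errormsg as errormsg
| eval errormsg=mvfilter(errormsg!="null" AND errormsg!="")
| table errormsg
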
Hello, a user's account name was changed and I want to transfer ownership of her Splunk knowledge objects (alerts) to her new account name. I would like to achieve this through the CLI, and since the user changed her name, I want the new name applied to the knowledge objects. Please, how do I go about making this change?
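As far as I know there is no dedicated splunk CLI subcommand for this, but the REST API (which you can call from the command line with curl) exposes an acl endpoint on each knowledge object that accepts a new owner. A sketch for a single saved alert; oldowner, newowner, MyAlert, the "search" app, and the admin credentials are all placeholders, and it has to be repeated per object:

# Reassign one alert (a saved search) from oldowner to newowner, keeping it shared at app level
curl -k -u admin:yourpassword \
  https://localhost:8089/servicesNS/oldowner/search/saved/searches/MyAlert/acl \
  -d owner=newowner -d sharing=app
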
Getting the below error in the logs of the AppDynamics cluster agent:

[ERROR]: 2024-03-15 14:36:30 - agentregistrationmodule.go:132 - clusterId: -1
[ERROR]: 2024-03-15 14:36:30 - agentregistrationmodule.go:134 - Registration properties: {}
[INFO]: 2024-03-15 14:37:30 - agentregistrationmodule.go:119 - Initial Agent registration
[ERROR]: 2024-03-15 14:37:30 - agentregistrationmodule.go:131 - Failed to send agent registration request: Status: 401 Unauthorized, Body: <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <title>Unauthorized</title> </head> <body> HTTP Error 401 Unauthorized <p/> This request requires HTTP authentication </body> </html>

I'm very new to AppDynamics and I have to get this implemented ASAP. I'm running a k8s cluster in minikube right now and configured the cluster agent through the Helm chart. I assume there is some issue with the values I have set in the values.yml file:

controllerInfo:
  url: "https://xxxxx.saas.appdynamics.com:443"
  account: [Redacted]
  username: [Redacted]  (or should the complete email address be used?)
  password:
  accessKey: (no issue with that, I have verified it twice)

Please help in resolving the issue.

^ Post edited by @Ryan.Paredez to remove sensitive account name and info. Please be careful not to share account names, emails, or passwords on Community posts.

So the logs changed from typical JSON to this:

"message":"type=\"CLIENT_LOGIN\", realmId=\"xxx\", clientId=\"xxx\", userId=\"xxx"

and for type Splunk now extracts the value "\ instead of CLIENT_LOGIN. Now the searches do not work anymore.

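Not sure how your current extractions are defined, but one search-time workaround is to pull the message field out with spath and then rex the embedded key="value" pairs out of it. A sketch with placeholder index/sourcetype names:

index=your_index sourcetype=your_sourcetype
| spath message
| rex field=message "type=\"(?<type>[^\"]+)\""
| rex field=message "realmId=\"(?<realmId>[^\"]+)\""
| rex field=message "userId=\"(?<userId>[^\"]+)\""

If the raw event really is valid JSON with the inner quotes escaped, those backslashes are removed when the message field is extracted, so the rex patterns above match plain quotes inside it.
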
Hi all, I'm looking at the volume of indexes and how much they ingest in order to calculate license volumes. I am aware I could find this answer straight away, but I like to investigate further. I'm not sure how to construct an SPL search that looks at just metrics indexes and checks how much of the daily licensing quota they use up per day. Can anyone help me with this?
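A sketch of one way to approach it, assuming you can search _internal on the license manager: pull daily usage per index from license_usage.log and restrict it to indexes that the REST endpoint reports as datatype=metric. The rest call runs against the search head you are on, so in a distributed setup the index list may need adjusting:

index=_internal source=*license_usage.log type="Usage"
    [| rest splunk_server=local /services/data/indexes count=0
     | search datatype=metric
     | fields title
     | rename title as idx]
| bin _time span=1d
| stats sum(b) as bytes by _time, idx
| eval GB=round(bytes/1024/1024/1024, 2)
| xyseries _time idx GB
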
Hello, a user's account name was changed and I want to transfer ownership of her Splunk knowledge objects (alerts) to her new account name. Please, how do I go about making this change?
Hi all, could someone please explain how licensing works for both events and metrics in Splunk Cloud? I've looked at other posts in the Splunk Community but they don't really make sense to me. Would love a fresh answer to see if there are any differences or updated explanations. Cheers
I'm on Splunk Enterprise 9.1.3, and I've configured the add-on (no proxy) with the SolarWinds server name, port, and credentials. I've configured the inputs, and I see nothing. Running tcpdump shows no traffic to the SolarWinds server or the configured port. This is what I see in the log:

2024-03-14 19:55:48,630 +0000 log_level=ERROR, pid=3098123, tid=Thread-4, file=ta_data_collector.py, func_name=index_data, code_line_no=113 | [stanza_name="ics_query"] Failed to index data
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_SolarWinds/bin/splunk_ta_solarwinds/aob_py3/cloudconnectlib/splunktacollectorlib/data_collection/ta_data_collector.py", line 109, in index_data
    self._do_safe_index()
  File "/opt/splunk/etc/apps/Splunk_TA_SolarWinds/bin/splunk_ta_solarwinds/aob_py3/cloudconnectlib/splunktacollectorlib/data_collection/ta_data_collector.py", line 129, in _do_safe_index
    self._client = self._create_data_client()
  File "/opt/splunk/etc/apps/Splunk_TA_SolarWinds/bin/splunk_ta_solarwinds/aob_py3/cloudconnectlib/splunktacollectorlib/data_collection/ta_data_collector.py", line 99, in _create_data_client
    self._data_loader.get_event_writer())
  File "/opt/splunk/etc/apps/Splunk_TA_SolarWinds/bin/splunk_ta_solarwinds/aob_py3/cloudconnectlib/splunktacollectorlib/ta_cloud_connect_client.py", line 20, in __init__
    from ..core.pipemgr import PipeManager
  File "/opt/splunk/etc/apps/Splunk_TA_SolarWinds/bin/splunk_ta_solarwinds/aob_py3/cloudconnectlib/core/__init__.py", line 1, in <module>
    from .engine import CloudConnectEngine
  File "/opt/splunk/etc/apps/Splunk_TA_SolarWinds/bin/splunk_ta_solarwinds/aob_py3/cloudconnectlib/core/engine.py", line 6, in <module>
    from .http import HttpClient
  File "/opt/splunk/etc/apps/Splunk_TA_SolarWinds/bin/splunk_ta_solarwinds/aob_py3/cloudconnectlib/core/http.py", line 26, in <module>
    'http_no_tunnel': socks.PROXY_TYPE_HTTP_NO_TUNNEL,
AttributeError: module 'socks' has no attribute 'PROXY_TYPE_HTTP_NO_TUNNEL'

I'd appreciate any help in getting this working.

I want to create statistics per group of devices rather than per individual device. I tried eval, but it produced no result.

| eval GR=case(host=5087, GR2, host=7750, GR1, host=7751, GR1, host=7752, GR2)
| stats count by GR

TIA, Leon

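One thing worth checking: in eval, unquoted GR1/GR2 are treated as field names rather than string literals, and if those fields do not exist case() returns null for every event, so stats by GR has nothing to group on. A sketch of the same search with the group names (and, in case host is not numeric, the host values) quoted as strings:

| eval GR=case(host="5087", "GR2", host="7750", "GR1", host="7751", "GR1", host="7752", "GR2")
| stats count by GR
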
Hello,

I am receiving these errors and my HF is not working properly. I think it is something related to the SSL interception and the intermediate and root CA, but I have not been able to pin it down.

Root Cause(s): More than 70% of forwarding destinations have failed. Ensure your hosts and ports in outputs.conf are correct. Also ensure that the indexers are all running, and that any SSL certificates being used for forwarding are correct.

Last 50 related messages:
03-15-2024 08:14:15.748 -0400 WARN AutoLoadBalancedConnectionStrategy [61817 TcpOutEloop] - Applying quarantine to ip=34.216.133.150 port=9997 connid=0 _numberOfFailures=2
03-15-2024 08:14:15.530 -0400 WARN AutoLoadBalancedConnectionStrategy [61817 TcpOutEloop] - Applying quarantine to ip=35.162.96.25 port=9997 connid=0 _numberOfFailures=2
03-15-2024 08:14:15.296 -0400 WARN AutoLoadBalancedConnectionStrategy [61817 TcpOutEloop] - Applying quarantine to ip=44.231.134.204 port=9997 connid=0 _numberOfFailures=2
03-15-2024 08:14:14.425 -0400 INFO AutoLoadBalancedConnectionStrategy [61817 TcpOutEloop] - Removing quarantine from idx=44.231.134.204:9997 connid=0
03-15-2024 08:14:14.425 -0400 INFO AutoLoadBalancedConnectionStrategy [61817 TcpOutEloop] - Removing quarantine from idx=35.162.96.25:9997 connid=0
03-15-2024 08:14:14.425 -0400 INFO AutoLoadBalancedConnectionStrategy [61817 TcpOutEloop] - Removing quarantine from idx=34.216.133.150:9997 connid=0
03-15-2024 08:12:56.049 -0400 WARN AutoLoadBalancedConnectionStrategy [61817 TcpOutEloop] - Applying quarantine to ip=35.162.96.25 port=9997 connid=0 _numberOfFailures=2

This is my outputs.conf:

[tcpout]
defaultGroup = indexers

[tcpout:indexers]
server = inputs1.tenant.splunkcloud.com:9997, inputs2.tenant.splunkcloud.com:9997, inputs3.tenant.splunkcloud.com:9997, inputs4.tenant.splunkcloud.com:9997, inputs5.tenant.splunkcloud.com:9997, inputs6.tenant.splunkcloud.com:9997, inputs7.tenant.splunkcloud.com:9997, inputs8.tenant.splunkcloud.com:9997, inputs9.tenant.splunkcloud.com:9997, inputs10.tenant.splunkcloud.com:9997, inputs11.tenant.splunkcloud.com:9997, inputs12.tenant.splunkcloud.com:9997, inputs13.tenant.splunkcloud.com:9997, inputs14.tenant.splunkcloud.com:9997, inputs15.tenant.splunkcloud.com:9997
forceTimebasedAutoLB = true
autoLBFrequency = 40

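To see why the connections are being dropped (and whether it really is a certificate problem from SSL interception), the HF's own splunkd log usually has more detail than the health message. A sketch of a search against _internal; the host filter is a placeholder and the component list is illustrative:

index=_internal host=your_hf_hostname sourcetype=splunkd (log_level=WARN OR log_level=ERROR)
    (component=TcpOutputProc OR component=SSLCommon OR component=X509Verify)
| stats count, latest(_raw) as latest_message by component

If a corporate proxy is intercepting TLS on port 9997, the certificates the forwarder sees would be the proxy's rather than Splunk Cloud's, so an interception exception for the inputs*.tenant.splunkcloud.com endpoints is typically needed.
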
In my Splunk instance, logs are sent to the central instance via a universal forwarder, and the deployment server is used to distribute the different configurations to the various clients. For parsing Windows logs, the Windows add-on is used, which also provides a specific sourcetype. The problem is that for Windows clients we are unable to filter authentication events by:
- Status (success/logoff/logon failed), with EventCode: 4624 -> logon success, 4625 -> logon failure, 4634 -> logoff
- Account name, i.e. we want to filter logs that contain a certain substring in the account name with a regex (defined within the same whitelist that contains the event filter for the EventCodes above).
At present, events reach the central instance filtered only by EventCode rather than by EventCode and a substring contained in the account name field. Could you help me?

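For the Windows event log inputs, the whitelist setting accepts key=regex pairs (which are ANDed within a single whitelist line), and Message is one of the allowed keys, so an account-name substring can be matched inside the message body. A sketch for inputs.conf in the deployment app; the "svc_" substring is purely a placeholder and this is untested against your events:

[WinEventLog://Security]
disabled = 0
# Keep only logon success/failure/logoff events whose Account Name contains "svc_" (placeholder substring).
# Both conditions must match because they are on the same whitelist line.
whitelist = EventCode=%^(4624|4625|4634)$% Message=%(?i)Account Name:\s+[^\r\n]*svc_%
renderXml = false
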
Hi, I'm using the functions PERC95 (p95) and PERC99 (p99) to retrieve request duration/response time for requests from a server farm (frontend servers). As far as I have understood, these functions should give you the MAX value of a subset of values, so in a thought scenario, if you have 100 requests during 1 second, p95 should take the 95 requests with the lowest response times and, out of these 95 requests, pick the highest response time as the p95 value. If the response times of these 95 requests were in the range of 50ms to 300ms, the p95 value would then be 300ms. I've used searches with p95 and p99 and thought this was correct, but looking at the events I get out of both p95 and p99, the response time does not make sense, as this "300ms" value cannot be found, and very often I cannot find any value close to it at all. Could anyone enlighten me here in relation to the output I'm getting?

Example of search:

index=test host=server sourcetype=app_httpd_access AND "example"
| bin _time span=1s
| stats p99(A_1) as RT_p99_ms p95(A_1) as RT_p95_ms count by _time
| eval RT_p95_ms=round(RT_p95_ms/1000,2)
| eval RT_p99_ms=round(RT_p99_ms/1000,2)

p95 value output: 341,87ms
Total number of values returned during 1 second for p95: 15
Response time output in ms (I was expecting the value 341,87 at the top here, but it's not present):
343,69
330,675
329,291
301,369
279,018
246,719
106,387
103,216
100,232
44,794
44,496
42,491
38,974
38,336
34,201

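One thing that may explain it: by default the perc/p functions return an interpolated (and, for large data sets, approximated) percentile, so the result does not have to be one of the observed values; exactperc<X> avoids the approximation, but the result can still fall between two observed values. A quick way to see this with made-up data (15 values of 10, 20, ..., 150):

| makeresults count=15
| streamstats count as n
| eval A_1=n*10
| stats p95(A_1) as p95, exactperc95(A_1) as exact_p95, max(A_1) as max

The p95 here lands between 140 and 150 even though no event has that value, which matches the behaviour you are describing.
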
I'm running into a problem: I have placed an app on the SH and connected it to the UF, and on the UF the data paths are being monitored, but when I search I cannot find anything. Below is my inputs.conf:

[monitor:///tutorialdata/www*/access.log]
index = web
host_segment = 2
sourcetype = web:access

[monitor:///tutorialdata/www*/secure.log]
index = web
host_segment = 2
sourcetype = web:secure

and props.conf:

EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)
[web:secure]
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)

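A couple of quick checks that might help narrow down where the data stops, using the index name from the inputs.conf above; both are sketches to run from the search head. First, see whether anything has been indexed into web at all, over all time:

| tstats count where index=web by host, sourcetype

Then check whether the forwarder is actually reading the monitored files (its _internal logs mention the file paths):

index=_internal sourcetype=splunkd (component=TailReader OR component=WatchedFile OR component=BatchReader) "*tutorialdata*"

If the tstats search returns nothing, also confirm that an index named web actually exists on the indexer; events sent to a non-existent index are dropped unless a lastChanceIndex is configured.
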
Hi All, I have a Splunk cluster environment where, while pulling data from a source, it gets indexed twice, not as a separate event, but within the same event. So every field has the same value appearing twice, making it a multivalue field. The same source configuration works fine on a standalone Splunk server but fails on the cluster. I have tried having props.conf present only in the data app on the indexers, but with that, field extraction does not happen at all. If I keep props.conf on both the HF and the data app, field extraction happens, but with the issue above. I would appreciate it if anyone has any lead on this. TIA.
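Just a guess without seeing the props.conf, but doubled values inside the same event are very often a sign of the same fields being extracted twice: once at index time (for example INDEXED_EXTRACTIONS on the HF) and once at search time (KV_MODE or auto JSON extraction on the search head). If that is what is happening and the data is JSON, the usual split looks something like this sketch, with my_sourcetype as a placeholder:

# props.conf on the UF/HF (index-time structured extraction happens here)
[my_sourcetype]
INDEXED_EXTRACTIONS = json

# props.conf on the search head(s) (prevent a second, search-time JSON extraction)
[my_sourcetype]
KV_MODE = none
AUTO_KV_JSON = false
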