Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

I have a query where I want to build a dataset from a variable and its 4 previous values. I can solve it like so:

| makeresults
| eval id=split("a,b,c,d,e,f,g",",")
| eval a=split("1,2,3,4,5,6,7",",")
| eval temp=mvzip(id,a,"|")
| mvexpand temp
| rex field=temp "(?P<id>[^|]+)\|(?P<a>[^|]+)"
| fields - temp
| streamstats current=false last(a) AS a_lag1
| streamstats current=false last(a_lag1) AS a_lag2
| streamstats current=false last(a_lag2) AS a_lag3
| streamstats current=false last(a_lag3) AS a_lag4
| where isnotnull(a_lag4)
| table id a*

However, if I want to extend this to, say, 100 previous values, the code becomes convoluted and slow. I imagine there must be a better way to accomplish this, but my research has not turned up any alternative. Any ideas are appreciated.
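One possible simplification (a sketch, not from the original thread): the built-in autoregress command generates lagged copies of a field over a range of offsets, so the chain of streamstats calls could potentially collapse into a single command. The a_p1 ... a_p4 field names are what autoregress produces by default.

| makeresults
| eval id=split("a,b,c,d,e,f,g",","), a=split("1,2,3,4,5,6,7",",")
| eval temp=mvzip(id,a,"|")
| mvexpand temp
| rex field=temp "(?P<id>[^|]+)\|(?P<a>[^|]+)"
| fields - temp
``` create a_p1 ... a_p4 in one step; p=1-100 would cover 100 lags ```
| autoregress a p=1-4
| where isnotnull(a_p4)
| table id a*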
Hi, our firewalls generate around 1000 High and Critical alerts daily. I would like to create use cases based on these notifications, but I am not sure what the best way is to handle that volume. Could somebody advise on the best way to implement this, please?
Hi there, what are the best practices to migrate from Azure Sentinel to Splunk? We want to migrate sources, historical data, and use cases.
Hi Splunkers, I have a request from my customer. Like many prod environments, we have Windows logs. We know that with the Splunk Add-on for Microsoft Windows we can see events in the Splunk Console in two ways: legacy format (like the original ones in AD) or XML. Is it possible to see them in JSON format? If yes, can we achieve this directly with the above add-on, or do we need other tools?
Without any concrete data it's just fortune telling. Check processes, check I/O saturation, check memory usage. Verify whether it's even Splunk that's hogging the CPU. Restarting processes blindly will probably not help much without addressing the underlying cause. Has anything been changed recently? Upgraded?
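If the _introspection index is enabled on that host (an assumption; the field names below come from Splunk's standard resource-usage introspection data, not from this thread), a quick check of Splunk's own CPU and memory footprint might look something like:

index=_introspection sourcetype=splunk_resource_usage component=Hostwide host=<your_host>
| timechart avg(data.cpu_system_pct) AS cpu_system avg(data.cpu_user_pct) AS cpu_user avg(data.mem_used) AS mem_used

Comparing that with OS-level tools (top, iostat, vmstat) should show whether splunkd is really the process consuming the CPU.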
This is because you have 2 UFs on the same machine. Maybe you should just increase the limit in limits.conf:

[thruput]
maxKBps = <integer>
* The maximum speed, in kilobytes per second, that incoming data is processed through the thruput processor in the ingestion pipeline.
* To control the CPU load while indexing, use this setting to throttle the number of events this indexer processes to the rate (in kilobytes per second) that you specify.
* NOTE:
  * There is no guarantee that the thruput processor will always process less than the number of kilobytes per second that you specify with this setting. The status of earlier processing queues in the pipeline can cause temporary bursts of network activity that exceed what is configured in the setting.
  * The setting does not limit the amount of data that is written to the network from the tcpoutput processor, such as what happens when a universal forwarder sends data to an indexer.
  * The thruput processor applies the 'maxKBps' setting for each ingestion pipeline. If you configure multiple ingestion pipelines, the processor multiplies the 'maxKBps' value by the number of ingestion pipelines that you have configured.
  * For more information about multiple ingestion pipelines, see the 'parallelIngestionPipelines' setting in the server.conf.spec file.
* Default (Splunk Enterprise): 0 (unlimited)
* Default (Splunk Universal Forwarder): 256

By default the UF sends at 256 KBps. I set it to 2048 for many UFs that send a lot of data. You could also set it to 0 to disable the thruput limit entirely.
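As a concrete sketch (2048 is just the example value mentioned above; tune it to your bandwidth), the override on the forwarder would go in something like $SPLUNK_HOME/etc/system/local/limits.conf or an app's local/limits.conf, followed by a UF restart:

[thruput]
# raise the forwarder's default 256 KBps ingestion throttle
maxKBps = 2048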
Hi @Colloh, I would suggest you open the site in incognito mode, or clear your browser cookies and log in to STEP again; this should be a temporary issue.
My script name was access-abc.sh. I just removed the hyphen and renamed it to accessabc.sh; that fixed the issue and I am now able to see the data in Splunk. But now I have an issue with event formatting. The actual website data I am ingesting is shown below:

##### BEGIN STATUS #####
#LAST UPDATE  :  Tue,  28  Nov  2023  11:00:16  +0000
Abcstatus.status=ok
Abcstatus.lastupdate=17xxxxxxxx555

###  ServiceStatus  ###
xxxxx
xxxxxx
xxxx
###  SystemStatus  ###
XXXX'
XXXX

###  xyxStatus  ###
XXX
XXX
XXX
.
.
.
.
So on....

But in Splunk the lines below are coming in as separate events instead of being part of one complete event:

##### FIRST STATUS ##### - is coming as a separate event
Abcstatus.status=ok - this is also coming as a separate event

All the lines below are coming in as one event, which is correct, and the two lines above should also be part of this one event:

Abcstatus.lastupdate=17xxxxxxxx555
###  ServiceStatus  ###
xxxxx
xxxxxx
xxxx
###  SystemStatus  ###
.
.
.
So on....
#####   END STATUS  #####

Below is my props:

DATETIME_CONFIG = CURRENT
SHOULD_LINEMERGE = TRUE
BREAK_ONLY_AFTER = ^#{5}\s{6}END\sSTATUS\s{6}\#{5}
MUST_NOT_BREAK_AFTER = \#{5}\s{5}BEGIN\sSTATUS\s{5}\#{5}
TIME_PREFIX = ^#\w+\s\w+\w+\s:\s
MAX_TIMESTAMP_LOOKAHEAD = 200

Can you please help me with the issue?
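One thing that might be worth testing (a sketch only, not from this thread; the stanza name is a placeholder and it assumes each block reliably starts with a "##### BEGIN STATUS #####" marker): instead of line merging, break events directly at the BEGIN marker with LINE_BREAKER, so everything up to the next BEGIN marker stays in one event.

[your_sourcetype]
SHOULD_LINEMERGE = false
# break before each "##### BEGIN STATUS #####" header; \s+ tolerates variable spacing
LINE_BREAKER = ([\r\n]+)(?=#{5}\s+BEGIN\s+STATUS\s+#{5})
TRUNCATE = 0
DATETIME_CONFIG = CURRENT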
Yes.  My problem actually grew worse with higher latency numbers since 9.1.1
Hi @tej57, thanks a lot, you are right, the parameter I was searching for is there. I noticed a strange behavior and I don't know if you can help me with this: before changing the parameter, if I search on our Splunk, most, but not all, events are in XML format. That seems strange because, analyzing the inputs.conf in the add-on, every stanza has the parameter set to true; so how is this possible? My suspicion is that, in another inputs.conf or other .conf file, that parameter is set to true. How can I check this? Maybe using the btool utility?
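Yes, btool is the usual way to see the merged configuration and which file each setting comes from. A sketch, run on the host that owns the inputs (assuming here that the parameter in question is renderXml; substitute the real setting name):

$SPLUNK_HOME/bin/splunk btool inputs list --debug | grep -i renderXml

The --debug flag prefixes each line with the path of the file that wins after precedence is applied, so any other inputs.conf overriding the add-on's value will show up.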
Hello, I'm implementing Splunk Security Essentials in an environment that already has detection rules based on the MITRE ATT&CK framework. I have correctly entered the data sources in Data Inventory and marked them as "Available". In Content > Custom Content, I added our detection rules by hand. I've specified the Tactics, and the MITRE Techniques and Sub-Techniques. I've also indicated their status in bookmarking, and some are "Successfully implemented". When I go to Analytics Advisor > MITRE ATT&CK Framework, I see the "Content (Available)" in the MITRE ATT&CK matrix, and it's consistent with our detection rules. But when I select Threat Groups, under "2. Selected Content", the "Total Content Selected" count is zero, even though the detection rules relate to the sub-techniques used by the selected Threat Groups. How can I solve this problem?
Hi, is there any documentation on how tokens work when used in JavaScript files? The docs at https://docs.splunk.com/Documentation/Splunk/9.1.2/Viz/tokens don't present much info on JavaScript usage. In particular, I am trying to use tokens to delete KV store values and I am confused about how this can be done. Just using tokens.unset() is not working. Any help would be appreciated!
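For reference, a minimal sketch of how dashboard tokens are usually read and written from a dashboard's JavaScript via the SplunkJS token models (my_token is just a placeholder name; deleting KV store records is a separate REST call and is not something a token does on its own):

require(['splunkjs/mvc', 'splunkjs/mvc/simplexml/ready!'], function (mvc) {
    // "default" holds unsubmitted token values, "submitted" the submitted ones
    var defaultTokens = mvc.Components.getInstance('default');
    var submittedTokens = mvc.Components.getInstance('submitted');

    var value = defaultTokens.get('my_token');   // read a token
    defaultTokens.set('my_token', 'some value'); // set a token
    defaultTokens.unset('my_token');             // clear it again

    // keep the submitted model in sync if searches depend on $my_token$
    submittedTokens.set(defaultTokens.toJSON());
});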
We are setting up the Splunk OTel Collector against an SSL- and authorization-enabled Solr. We are facing an issue passing the username and password for the Solr endpoint in the agent_config.yaml file. Refer to the content of the config file below; for security reasons, we have masked the hostname, user ID and password details.

receivers:
  smartagent/solr:
    type: collectd/solr
    host: <hostname>
    port: 6010
    enhancedMetrics: true
exporters:
  sapm:
    access_token: "${SPLUNK_ACCESS_TOKEN}"
    endpoint: "${SPLUNK_TRACE_URL}"
  signalfx:
    access_token: "${SPLUNK_ACCESS_TOKEN}"
    api_url: "${SPLUNK_API_URL}"
    ingest_url: "${SPLUNK_INGEST_URL}"
    sync_host_metadata: true
    headers:
      username: <username>
      password: <password>
    correlation:
  otlp:
    tls:
      insecure: false
      cert_file: <certificate_file>.crt
      key_file: <key_file>.key

Error log:

-- Logs begin at Fri 2023-11-17 23:32:38 EST, end at Tue 2023-11-28 02:46:22 EST. --
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: File "/usr/lib/splunk-otel-collector/agent-bundle/lib/python3.11/site-packages/sfxrunner/scheduler/simple.py", line 57, in _call_on_interval
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: func()
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: File "/usr/lib/splunk-otel-collector/agent-bundle/collectd-python/solr/solr_collectd.py", line 194, in read_metrics
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: solr_cloud = fetch_collections_info(data)
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: File "/usr/lib/splunk-otel-collector/agent-bundle/collectd-python/solr/solr_collectd.py", line 328, in fetch_collections_info
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: get_data = _api_call(url, data["opener"])
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: File "/usr/lib/splunk-otel-collector/agent-bundle/collectd-python/solr/solr_collectd.py", line 286, in _api_call
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: resp = urllib.request.urlopen(req)
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: File "/usr/lib/splunk-otel-collector/agent-bundle/lib/python3.11/urllib/request.py", line 216, in urlopen
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: return opener.open(url, data, timeout)
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: File "/usr/lib/splunk-otel-collector/agent-bundle/lib/python3.11/urllib/request.py", line 519, in open
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: response = self._open(req, data)
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: ^^^^^^^^^^^^^^^^^^^^^
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: File "/usr/lib/splunk-otel-collector/agent-bundle/lib/python3.11/urllib/request.py", line 536, in _open
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: result = self._call_chain(self.handle_open, protocol, protocol +
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: File "/usr/lib/splunk-otel-collector/agent-bundle/lib/python3.11/urllib/request.py", line 496, in _call_chain
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: result = func(*args)
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: ^^^^^^^^^^^
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: File "/usr/lib/splunk-otel-collector/agent-bundle/lib/python3.11/urllib/request.py", line 1377, in http_open
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: return self.do_open(http.client.HTTPConnection, req)
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: File "/usr/lib/splunk-otel-collector/agent-bundle/lib/python3.11/urllib/request.py", line 1352, in do_open
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: r = h.getresponse()
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: ^^^^^^^^^^^^^^^
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: ^^^^^^^^^^^^^^^
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: File "/usr/lib/splunk-otel-collector/agent-bundle/lib/python3.11/http/client.py", line 1378, in getresponse
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: response.begin()
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: File "/usr/lib/splunk-otel-collector/agent-bundle/lib/python3.11/http/client.py", line 318, in begin
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: version, status, reason = self._read_status()
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: ^^^^^^^^^^^^^^^^^^^
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: File "/usr/lib/splunk-otel-collector/agent-bundle/lib/python3.11/http/client.py", line 300, in _read_status
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: raise BadStatusLine(line)
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: [30B blob data]
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: [3B blob data]
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: {"kind": "receiver", "name": "smartagent/solr", "data_type": "metrics", "createdTime": 1700334408.8198304, "lineno": 56, "logger": "root", "monitorID": "smartagentsolr", "monitorType": "collectd/solr", "runnerPID": 1703, "sourcePath": "/usr/lib/splunk-otel-collector/agent-bundle/lib/python3.11/site-packages/sfxrunner/logs.py"}
Nov 18 14:06:58 aescrsbsql01.scr.dnb.net otelcol[1035]: 2023-11-18T14:06:58.821-0500 error signalfx/handler.go:188 Traceback (most recent call last):
I actually tried this query also:

| eval 'Target Date' = strptime('Target Date', "%m/%d/%y")
| eval _time = 'Target Date'
| timechart span=1mon dc(sno) as Target
| eval "Actual Date"=strptime('Actual Date', "%m/%d/%y")
| eval _time='Actual Date'
| timechart span=1mon dc(sno) as Completed
| streamstats sum(Completed) as Completed]
| stats values(*) as * by _time

@gcusello
My target is 100. If anything is completed, the Completed line graph should populate. @gcusello
Hi @Muthu_Vinith , I suppose that status=completed is an event with a timestamp, so you could take the earliest and latest timestamps in your events:

<your_search>
| stats earliest(_time) AS earliest latest(_time) AS latest count BY status
| append [ | makeresults | eval status="Not Started", count=0 | fields status count ]
| append [ | makeresults | eval status="Progress", count=0 | fields status count ]
| append [ | makeresults | eval status="Wip Progress", count=0 | fields status count ]
| append [ | makeresults | eval status="Completed", count=0 | fields status count ]
| stats values(earliest) AS earliest values(latest) AS latest sum(count) AS total BY status
| eval status=if(total=0,"NA",status), earliest=strftime(earliest,"%m/%d/%y"), latest=strftime(latest,"%m/%d/%y")

Ciao. Giuseppe
Hi! I am trying to evaluate AppDynamics for monitoring IIS sites, but I get the error "unable to create application" when I try to create an application. / FKE
I'm working on visualizing completion versus target date in Splunk, and I'm facing a challenge because there's no completion level specified in my data. I have the Target Date and Actual Date fields; Actual Date has no dates in it yet, and I want to create a chart that shows the progress towards the target date.

I've tried the following search query:

| eval 'Target Date' = strptime('Target Date', "%m/%d/%y")
| eval _time = 'Target Date'
| timechart span=1mon dc(sno) as Target

Could you please guide me on how to modify this query or suggest an alternative approach to visualize completion versus target date when completion data is absent? @gcusello @bowesmana
| rex "(?<username>[^:]*):(?<passwd>[^:]*):(?<path>[^:]*)"
Hi @Muthu_Vinith , good for you, see you next time! Let us know if we can help you more, and please accept one answer for the other people of the Community. Ciao and happy splunking. Giuseppe. P.S.: Karma Points are appreciated by all the contributors.