All Posts


I have a few thousand universal forwarders, managed by a deployment server, and we're sending all logs (internal and non-internal) to index cluster A. In addition, I would like to send all internal Splunk logs to index cluster B. What's the simplest app package I can deploy via the deployment server to send a 2nd set of all internal logs from universal forwarders to index cluster B?
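A minimal sketch of such a deployment app, assuming indexer discovery is not in use. The group names (clusterA/clusterB), app name, and receiver addresses are placeholders for your environment:

```
# deployment-apps/org_internal_to_clusterB/local/outputs.conf
[tcpout]
defaultGroup = clusterA

[tcpout:clusterA]
server = idxA1.example.com:9997,idxA2.example.com:9997

[tcpout:clusterB]
server = idxB1.example.com:9997,idxB2.example.com:9997

# deployment-apps/org_internal_to_clusterB/local/inputs.conf
# Clone only the UF's own internal logs to both output groups;
# all other data keeps the defaultGroup (clusterA only).
[monitor://$SPLUNK_HOME/var/log/splunk]
_TCP_ROUTING = clusterA,clusterB
```

The `_TCP_ROUTING` override on the internal-log monitor stanza is what duplicates those events to both groups while everything else follows `defaultGroup`.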
Was just going through the 'Masa diagrams' link: https://community.splunk.com/t5/Getting-Data-In/Diagrams-of-how-indexing-works-in-the-Splunk-platform-the-Masa/m-p/590774 If you look at the "Detail Diagram - Standalone Splunk", the queues are laid out like this (one example): (persistentQueue) + udp_queue --> parsingQueue --> aggQueue --> typingQueue --> indexQueue  So let's say we have a UDP input configured and some congestion occurs in typingQueue; the persistentQueue should still be able to hold the data until the congestion clears up. That should prevent data loss, right? Sorry for the loaded, assumption-based question; I am trying to figure out what we can do to stop UDP input data from being dropped due to the typingQueue filling up. (P.S. Adding an extra HF is not an option right now.) Thanks in advance!
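For reference, a persistent queue is configured per network input in inputs.conf; a sketch, where the port and sizes are just examples:

```
# inputs.conf -- example values; tune sizes for your data volume
[udp://514]
queueSize = 10MB
persistentQueueSize = 500MB
```

The persistent queue spills the in-memory input queue to disk, so it can absorb a temporary downstream backup (e.g. a full typingQueue). But it buffers only at the input stage: if the congestion lasts long enough for the persistent queue to fill too, UDP data will still be dropped.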
I recently downloaded VT4Splunk and everything was working fine with our API key; then a few days later I received a warning to enter the API key. However, when I entered the key back in, I received the following error message: "Unexpected error when Validating VirusTotal API Key: 'ta_virustotal_app_settings'". We currently have Splunk Cloud 9.0.2303.201 and VT4Splunk 1.6.2. Any assistance you all can provide will be greatly appreciated!
I have multiple strings like the ones below in various log files. The intention is to retrieve them in a table and apply a group-by.

Satisfied Conditions: XYZ, ABC, 123, abc
Satisfied Conditions: XYZ, bcd, 123, abc
Satisfied Conditions: bcd, ABC, 123, abc
Satisfied Conditions: XYZ, ABC, 456, abc

The output should then be:

Condition  Count
XYZ        3
ABC        3
abc        4
bcd        2
123        3
456        1

I am almost there with retrieving the data column-wise but am not able to get it. Any input here would be helpful.
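One way to get there in SPL, assuming the condition list is only in _raw (the field name `condition` and the ", " delimiter in rex/makemv are assumptions about your data):

```
... your base search ...
| rex "Satisfied Conditions:\s*(?<condition>.+)"
| makemv delim=", " condition
| mvexpand condition
| stats count as Count by condition
| rename condition as Condition
| sort - Count
```

`makemv` splits the captured list into a multivalue field, `mvexpand` turns each value into its own event, and `stats count by` produces the per-condition totals.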
@_JP on the current-settings part I am in reasonable shape with the query below:

| rest splunk_server=local /services/authentication/users
| fields title, roles
| mvexpand roles
| append
    [| rest splunk_server=local /services/authorization/roles
     | fields title srchDiskQuota rtSrchJobsQuota srchJobsQuota cumulativeSrchJobsQuota cumulativeRTSrchJobsQuota
     | rename title as roles]
| stats values(srchDiskQuota) as srchDiskQuota, values(rtSrchJobsQuota) as rtSrchJobsQuota, values(srchJobsQuota) as srchJobsQuota, values(cumulativeSrchJobsQuota) as cumulativeSrchJobsQuota, values(title) as userid, values(cumulativeRTSrchJobsQuota) as cumulativeRTSrchJobsQuota by roles
| mvexpand userid
| stats values(srchDiskQuota) as srchDiskQuota, values(rtSrchJobsQuota) as rtSrchJobsQuota, values(srchJobsQuota) as srchJobsQuota, values(cumulativeSrchJobsQuota) as cumulativeSrchJobsQuota, values(cumulativeRTSrchJobsQuota) as cumulativeRTSrchJobsQuota by userid roles

I just want details on current utilization by user/role, and more on the search concurrency settings (resource utilization, etc.).
The ODBC driver on SplunkBase that enables Power BI to connect with Splunk is only available for macOS. Can the Windows version be made available?
I upgraded my SE from 7.2.4 to 8.2.8, and afterwards upgraded my apps and add-ons per compatibility, but some add-ons stopped working, and the SolarWinds add-on is one of them. I am getting the errors below:

10-26-2023 18:10:04.720 +0000 ERROR AdminManagerExternal [20948 TcpChannelThread] - Stack trace from python handler:
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_SolarWinds/bin/splunk_ta_solarwinds/aob_py3/splunktaucclib/rest_handler/handler.py", line 117, in wrapper
    for name, data, acl in meth(self, *args, **kwargs):
  File "/opt/splunk/etc/apps/Splunk_TA_SolarWinds/bin/splunk_ta_solarwinds/aob_py3/splunktaucclib/rest_handler/handler.py", line 179, in all
    **query
  File "/opt/splunk/etc/apps/Splunk_TA_SolarWinds/bin/splunk_ta_solarwinds/aob_py3/solnlib/packages/splunklib/binding.py", line 289, in wrapper
    return request_fun(self, *args, **kwargs)
  File "/opt/splunk/etc/apps/Splunk_TA_SolarWinds/bin/splunk_ta_solarwinds/aob_py3/solnlib/packages/splunklib/binding.py", line 71, in new_f
    val = f(*args, **kwargs)
  File "/opt/splunk/etc/apps/Splunk_TA_SolarWinds/bin/splunk_ta_solarwinds/aob_py3/solnlib/packages/splunklib/binding.py", line 679, in get
    response = self.http.get(path, all_headers, **query)
  File "/opt/splunk/etc/apps/Splunk_TA_SolarWinds/bin/splunk_ta_solarwinds/aob_py3/solnlib/packages/splunklib/binding.py", line 1183, in get
    return self.request(url, { 'method': "GET", 'headers': headers })
  File "/opt/splunk/etc/apps/Splunk_TA_SolarWinds/bin/splunk_ta_solarwinds/aob_py3/solnlib/packages/splunklib/binding.py", line 1244, in request
    raise HTTPError(response)
solnlib.packages.splunklib.binding.HTTPError: HTTP 404 Not Found -- {"messages":[{"type":"ERROR","text":"Not Found"}]}

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/splunk/lib/python3.7/site-packages/splunk/admin.py", line 150, in init
    hand.execute(info)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/admin.py", line 636, in execute
    if self.requestedAction == ACTION_LIST: self.handleList(confInfo)
  File "/opt/splunk/etc/apps/Splunk_TA_SolarWinds/bin/splunk_ta_solarwinds/aob_py3/splunk_aoblib/rest_migration.py", line 39, in handleList
    AdminExternalHandler.handleList(self, confInfo)
  File "/opt/splunk/etc/apps/Splunk_TA_SolarWinds/bin/splunk_ta_solarwinds/aob_py3/splunktaucclib/rest_handler/admin_external.py", line 40, in wrapper
    for entity in result:
  File "/opt/splunk/etc/apps/Splunk_TA_SolarWinds/bin/splunk_ta_solarwinds/aob_py3/splunktaucclib/rest_handler/handler.py", line 122, in wrapper
    raise RestError(exc.status, str(exc))
splunktaucclib.rest_handler.error.RestError: REST Error [404]: Not Found -- HTTP 404 Not Found -- {"messages":[{"type":"ERROR","text":"Not Found"}]}

10-20-2023 18:45:23.444 +0000 ERROR ModularInputs [15755 MainThread] - Unable to initialize modular input "solwarwinds_query" defined in the app "Splunk_TA_SolarWinds": Introspecting scheme=solwarwinds_query: script running failed (PID 15889 exited with code 1).

10-20-2023 18:45:23.443 +0000 ERROR ModularInputs [15755 MainThread] - <stderr> Introspecting scheme=solwarwinds_query: File "/opt/splunk/etc/apps/Splunk_TA_SolarWinds/bin/splunk_ta_solarwinds/aob_py3/cloudconnectlib/core/cacerts/ca_certs_locater.py", line 59, in _fallback
Hi @Sid, I haven't been able to find a solution yet. Thank you.
Syslog often does not play well with load balancers, especially when sent over TCP. So it might just be that your LB is doing something ugly to your events. With such a setup I'd start with a tcpdump on the indexer to verify what is actually reaching the network input.
Yes, this is exactly what I need! I tried to play with eval also, but without that case function. Thank you for your help, much appreciated!
Try something like this (obviously expand the case function to cover the other renames):

|`incident_review`
| stats count by disposition
| eval disposition=case(disposition=="disposition:1","true-positive", disposition=="disposition:2","false-positive", true(), disposition)
Are you interested in this user info in the context of the users for your Splunk environment, or are you looking at some other data to analyze the users? For Splunk, you can start with SPL that queries the REST interface, like this: | rest /services/authentication/users   If you want information on a particular user (e.g. fred), you can specify that name in the REST call like this: | rest /services/authentication/users/fred You can get a lot of info on what capabilities they have and other metadata about that user.   More info here.
Hello, please, I want to know if there is a way to display legends in the calendar heatmap application directly, without requiring a mouseover on the rectangles (circles).
Block storage
Here's one option. It uses the appendpipe command to override the fields if there are no results.

search index="abc" sourcetype=600000304_gg_abs_ipc2 source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "ReadFileImpl - ebnc event balanced successfully"
| eval True="✔"
| bin _time span=1d
| dedup _time
| eval EBNCStatus="ebnc event balanced successfully"
| appendpipe [stats count | eval True=" ", EBNCStatus="" | where count=0 | fields - count]
| table EBNCStatus True
Any inputs.conf file other than /opt/splunkforwarder/etc/system/default/inputs.conf. Best practice is to create your own app (/opt/splunkforwarder/etc/apps/org_aide_inputs, for example) and put the inputs.conf file there.
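A minimal example of such an app, with a hypothetical monitor stanza (the log path, index, and sourcetype below are placeholders for your environment):

```
# app layout on the forwarder
/opt/splunkforwarder/etc/apps/org_aide_inputs/
    local/
        inputs.conf

# org_aide_inputs/local/inputs.conf
[monitor:///var/log/aide/aide.log]
index = main
sourcetype = aide
disabled = 0
```

Keeping inputs in your own app (rather than editing system/local or a vendor app's files) means upgrades won't overwrite them and the app can be pushed from a deployment server.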
I am using a very simple query: |`incident_review` | stats count by disposition I get a table like this: When I make a bar chart it looks like this: What I am trying to do is the same bar chart, but instead of disposition:1, disposition:2..., I would like to see the values of these dispositions, so for example true-positive, false-positive... I tried to use "rename as" like this, but it doesn't work; the output is the same bar chart as above: |`incident_review` | stats count by disposition | rename disposition:1 as true-positive
It's strange, because I have opened port 8088 correctly in Windows Defender, and when I run the netstat command I can see it is open...
Hi @pm, follow the instructions at https://docs.splunk.com/Documentation/Forwarder/9.1.1/Forwarder/Installanixuniversalforwarder#Install_on_Linux You have to follow the deb procedure. Ciao. Giuseppe
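The deb procedure boils down to a few commands; a sketch, where the package filename and the deployment-server address are placeholders you must substitute for your download and environment:

```
# Download the .deb for your architecture from the Splunk universal forwarder
# download page, then install and start it:
sudo dpkg -i splunkforwarder-<version>-linux-2.6-amd64.deb
sudo /opt/splunkforwarder/bin/splunk start --accept-license
sudo /opt/splunkforwarder/bin/splunk enable boot-start

# Optionally point the forwarder at your deployment server:
sudo /opt/splunkforwarder/bin/splunk set deploy-poll <ds-host>:8089
```

On first start you will be prompted to create admin credentials for the forwarder.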
Hi, I am a Windows user trying to install the universal forwarder on Ubuntu. Can anyone share the download link and the steps, please?