All Topics



Hello Splunkers, I have the query below, where I am trying to get a count by field values. I am building a dynamic dashboard in which the three fields below are three dropdown inputs. How can I make the token in the stats command dynamic, so that when I choose a value from only one of the three dropdowns, that value populates the stats token? Thanks in advance!

index=<> sourcetype=<>
| search Type=$Token$
| search User=$Token$
| search Site=$Token$
| stats count by $Token$
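One common pattern (a sketch only — the token names $type_tok$, $user_tok$, $site_tok$, and $groupby_tok$ are hypothetical, not from the original post) is to give every dropdown its own token, default the unused ones to *, and add a fourth dropdown whose static choices are the three field names to drive the split-by clause:

```
index=<> sourcetype=<>
| search Type=$type_tok$ User=$user_tok$ Site=$site_tok$
| stats count by $groupby_tok$
```

With a default of * on each filter dropdown, selecting a value in only one input still produces a valid search, and the stats grouping follows whichever field name the group-by dropdown supplies.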
I have a field value in Splunk with the format below:

field_X = "AB 012 - some text here! ---- HOST INFORMATION: ---- Source: 1.1.2.3 ---- DETAILS: -- Destination ports: 777 33 -- Occurrences: 2244 -- Destination ip counts: 146 -- Actions: blocked -- Order Techniques : X3465 "

How can I split the above field value into multiple lines to make it more readable, using eval and regex?

field_X = AB 012 - some text here!
HOST INFORMATION:
Source: 1.1.2.3
DETAILS:
Destination ports: 777 33
Occurrences: 2244
Destination ip counts: 146
Actions: blocked
Order Techniques : X3465

All I want is to replace "--" with a line break or something that divides the field from one line into multiple lines.
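One way to sketch this (assuming the dash runs are always 2+ hyphens): replace each run of dashes with a marker, then split on the marker to make the field multivalue — multivalue fields render one value per line in the results table. The "@@" marker is arbitrary; any string not present in the data works.

```
| eval field_X = replace(field_X, "\s*-{2,}\s*", "@@")
| eval field_X = split(field_X, "@@")
```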
1. I have installed the universal forwarder and have a Splunk Cloud account.
2. Installed the forwarder credentials app using this command: /opt/splunkforwarder/bin/splunk install app /tmp/splunkclouduf.spl.
3. Restarted for the changes to take effect.
No logs in Splunk Cloud — index="*" found nothing.
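A first diagnostic sketch (assuming the forwarder's internal logs are being forwarded, which the credentials package normally enables): check whether the forwarder is reaching Splunk Cloud at all, since its own internal logs usually arrive before any data inputs do. Replace <your_forwarder_hostname> with the actual host name.

```
index=_internal host=<your_forwarder_hostname> sourcetype=splunkd
| stats count by source
```

Also note that a universal forwarder only sends data for inputs you define — with no inputs.conf stanzas beyond the defaults, nothing will appear in your event indexes even when the connection is healthy.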
So I have a macro with a field variable on which I want to use a wildcard, and worse, the field names tend to have dots. A typical field would be body.system.diskio.write.bytes, and I tried the following: LIKE($field$, "body_system_diskio%"), the idea being that it would error if the field did not at least contain body.system.diskio. I put the underscores in as I'm not sure it could handle the dots. This does not work for me. Does anyone know what I'm doing wrong here?

EDIT: I only had two options for conditionals and ended up getting it to work with match($BodySystemDiskIoBytes$, "body.system.diskio.write.bytes|body.system.diskio.read.bytes")
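For context on why the underscores behave oddly: in LIKE(), "_" is itself a wildcard matching exactly one character and "%" matches any run, while literal dots need no escaping at all. In match(), the pattern is a regex, so dots should be escaped or they match any character (which is why the unescaped version in the edit happens to work, just loosely). A tighter sketch of the prefix test with match():

```
match($field$, "^body\.system\.diskio\.")
```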
Team, the time difference between end_task_date and start_task_date is coming back null. Could you please take a look below and let me know what's wrong in my query?

SPL:
index=cloud sourcetype=lambda:Airflow2Splunk "\"logGroup\"" "\"airflow-OnePIAirflowEnvironment-DEV-Task\""
| rex field=_raw "Marking task as (?<status>[^\.]+)"
| where status IN("FAILED", "SUCCESS", "UP_FOR_RETRY")
| rex field=_raw "dag_id=(?<dag_id>\w+)"
| rex field=_raw "task_id=(?<task_id>\w+)"
| rex field=_raw "start_date=(?<task_start_date>\d{8}T\d{6})"
| rex field=_raw "end_date=(?<task_end_date>\d{8}T\d{6})"
| eval start_task_date=strptime(task_start_date,"%Y%m%dT%H%M%S")
| eval start_task_date=strftime(start_task_date,"%Y-%m-%d %H:%M:%S")
| eval end_task_date=strptime(task_end_date,"%Y%m%dT%H%M%S")
| eval end_task_date=strftime(end_task_date,"%Y-%m-%d %H:%M:%S")
| eval diff=end_task_date-start_task_date
| eval diff=strftime(diff, "%H:%M:%S")
| sort - _time
| table dag_id task_id status start_task_date end_task_date diff

The value in the diff column is coming back null; the other columns are populated correctly. Here are sample values for the date fields:
end_task_date: 2022-04-06 20:51:11, 2022-04-06 20:54:09
start_task_date: 2022-04-06 20:51:09, 2022-04-06 20:52:07
end_date: 20220406T205111, 20220406T205409
start_date: 20220406T205109, 20220406T205207
Please let me know if you need additional information pertaining to this request. Thanks, Sumit
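The likely cause: the strftime() calls overwrite the epoch values with formatted strings, so the subsequent subtraction of two strings yields null. A sketch that keeps the epoch values in separate fields (start_epoch/end_epoch are new helper field names, not from the original query) and formats the difference as a duration:

```
| eval start_epoch=strptime(task_start_date,"%Y%m%dT%H%M%S")
| eval end_epoch=strptime(task_end_date,"%Y%m%dT%H%M%S")
| eval diff=tostring(end_epoch-start_epoch, "duration")
| eval start_task_date=strftime(start_epoch,"%Y-%m-%d %H:%M:%S")
| eval end_task_date=strftime(end_epoch,"%Y-%m-%d %H:%M:%S")
```

Note also that strftime(diff, "%H:%M:%S") on a difference treats it as a timestamp, not an elapsed time; tostring(..., "duration") is the idiomatic way to render a span of seconds.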
I upgraded the heavy forwarders in my environment to Splunk Enterprise 8.2.5 and discovered, on the day of the upgrade, that I had stopped receiving data in one of my indexes.

By searching events in the index from before the upgrade, I was able to determine that the host the events come from is running Windows 2008 R2 (with Splunk UF version 7.2.2) — that may have something to do with this. I am trying to troubleshoot further and figure out how the data is brought into that index, but I am not a seasoned Splunk veteran by any means.

Searching around for answers has been a bit convoluted. Could anyone help me through the process of tracking down how that data is brought into that index? I'm thinking this may have something to do with a lack of HTTPS compatibility from the host to the heavy forwarder. Any help or guidance is much appreciated.
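A quick inventory sketch for tracking down how the data arrived: run this over a time range before the upgrade (replace <your_index> with the affected index) to list every host, sourcetype, and source that fed it, which usually points at the input on the forwarder side.

```
| tstats count where index=<your_index> by host, sourcetype, source
```

From there, checking splunkd.log on the heavy forwarder for connection errors from that Windows 2008 R2 host around the upgrade time would be the next step.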
I have the following data:

query="select field  from table where (status!="Y")  and ids.id IN ["123","145"] limit 500" params="{}"

How can I extract the query field while ignoring the special characters? My search

... | table query params

is cut off when it reaches the status field. How can I extract everything regardless of the special characters? Is there a recommended format I should change my logs to, to make them easier to parse?
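The value itself contains unescaped quotes, so automatic key="value" extraction stops at the first embedded quote. A sketch of an explicit extraction that anchors on the following key instead (this assumes params always appears immediately after query, as in the sample):

```
| rex field=_raw "query=\"(?<query>.*)\"\s+params=\"(?<params>[^\"]*)\""
```

The greedy .* backtracks to the last quote before " params=", so embedded quotes and brackets are kept. Longer term, emitting the logs as JSON with properly escaped quotes avoids the ambiguity entirely.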
Hello Splunkers, I have data where the indexed time is different from the actual file. The source has the correct date and time. I want Splunk to use it as the event time, and if that is not possible, I want to extract the date and time from the source field, create a new field, and use that as the date/time field. Below is an example of the source file:

Source: /admin/logs/abc/inventory/04-04-2022-101634-all-b5.xxx

1. I want Splunk to take 04-04-2022-101634 and use it as the event time, i.e. 04-04-2022 10:16:34 (dd-mm-yyyy-hh:mm:ss). I want the props.conf.
2. Also, if the data is indexed without using the source file, I want to extract the date and time from the source, create a new field called correct_time, and use it as _time.

Thanks in advance
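For option 2, a search-time sketch that pulls the timestamp out of the source path and overrides _time (the helper field src_ts is a name introduced here for illustration):

```
| rex field=source "(?<src_ts>\d{2}-\d{2}-\d{4}-\d{6})"
| eval correct_time=strptime(src_ts, "%d-%m-%Y-%H%M%S")
| eval _time=correct_time
```

For option 1, note that TIME_PREFIX/TIME_FORMAT in props.conf parse the event text rather than the source path, so extracting _time from the file name at index time generally needs additional index-time configuration and is worth testing on a sample before relying on it.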
I'm trying to understand the different capabilities within Splunk to see how they can be used to my advantage. I was exploring the apps_backup and apps_restore Splunk capabilities to see whether they can be used with the API for backing up and restoring specific apps without actual back-end access and without having to replicate the "$SPLUNK_HOME/etc/" directory. There is one mention of apps_restore in the document below, saying it can be used on the "apps/restore" endpoint: https://docs.splunk.com/Documentation/Splunk/8.2.5/Admin/Authorizeconf I noticed that restmap.conf also has entries for "apps/backup" and "apps/restore", but I could find no documentation on the usage of these endpoints and methods. Could someone point me in the right direction here?
I am trying to use the REST endpoints for the Microsoft Azure Add-on for Splunk (TA-MS-AAD). When posting the following:

localhost:8089/servicesNS/nobody/TA-MS-AAD/admin/TA_MS_AAD_account/testacct/create?username=myazidhere&password=lslslslslsslsl

I receive the following error:

"ERROR"<class 'splunk.admin.BadProgrammerException'>: This handler claims to support this action (4), but has not implemented it.

As you can see, I am trying to create our credentials via this API to support automated installation/configuration of this add-on. Has anyone been able to use the REST endpoints with this add-on? Any consumer examples would be great. Thanks!
I have Splunk_TA_nix installed and ps.sh enabled on my Apache Storm nimbus instances. I can run a general ps sourcetype query on a service I know should always be running, like rhnsd, and get events back just fine:

index=os host="my-stormn-1" sourcetype=ps rhnsd

However, when I do the same for the "stormnimbus" service, I get zero events back:

index=os host="my-stormn-1" sourcetype=ps stormnimbus

Meanwhile, "sudo systemctl status stormnimbus" on the my-stormn-1 instance itself shows that it is active and running. I'm having the same problem with the stormui service, as well as the stormsupervisor service running on my Storm supervisor instances. I should note that I do have Splunk_TA_nix installed on my Splunk indexers. Any advice as to why these services are not returning events with ps, and how to fix it, would be greatly appreciated.
Hi Splunk Community, I am trying to remove the data in a field after the first period. My field looks like this:

24611_abce.XXX.AAA.com
24612_r1q2e3.XXX.AAA.com
null
null
4iop45_q7w8e9.XXX.AAA.com
hki90lhf3_m1n2b3.QQQQ.AAA.com

I would like to remove everything after the first period in every row, but the patterns at the end do not match after the first period. It should look like this:

24611_abce
24612_r1q2e3
null
null
4iop45_q7w8e9
hki90lhf3_m1n2b3

thanks in advance!
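A sketch with eval and a regex that drops everything from the first dot onward (myfield is a placeholder for the actual field name — regex matching is leftmost-first, so \..*$ anchors at the first period):

```
| eval myfield = replace(myfield, "\..*$", "")
```

Rows without a period, like the "null" values, contain no match and pass through unchanged.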
This seems to me like it should be super simple (Looker, Tableau, etc.), but I've been working at this for almost 2 days and getting nowhere; I would be very appreciative if anyone could help. I'm trying to chart the percentage difference between the count of _time (i.e. the count of records) and a simple moving average over the last 5 days on the Y axis, with time (spans) on the X axis, where response_code>200, split by path. I'll paste an example of where I'm at, but I know I'm not even close. Can I get any tips please?

index=k8s_events namespace=ecom-middleware NOT method=OPTIONS response_code>200
| streamstats avg(count(_time)) as cTime window=5
| table _time path cTime
| timechart usenull=f span=8h avg(cTime) By path
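One way to sketch this (a rough outline, not a verified answer for this dataset): bucket first, count per day per path, compute the 5-point moving average per path with streamstats, derive the percentage difference, then pivot into chartable series. The field names sma5 and pct_diff are introduced here for illustration.

```
index=k8s_events namespace=ecom-middleware NOT method=OPTIONS response_code>200
| bin _time span=1d
| stats count by _time path
| streamstats window=5 global=f avg(count) as sma5 by path
| eval pct_diff = round((count - sma5) / sma5 * 100, 2)
| xyseries _time path pct_diff
```

The key design point is that streamstats needs one row per time bucket per path before it can average counts, which is why stats over binned _time comes first; global=f keeps the 5-row window per path rather than across all series.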
We are installing a custom-made app that contains some symlinks, but we're having the following problem: installing it from the web GUI removes all the links, breaking the app. But installing it by copying the app directory into '$SPLUNK_HOME/etc/apps' keeps the symlinks intact, and the app works. Is this intended behaviour? Is there any way to use symlinks inside an app? Regards, Javier.
Dear all, we have a controller C1 in which database D1 is cataloged, using agent A1. I need to see D1 from a new controller called C2, but I couldn't see agent A1 in C2. Can someone help me with this?

^ Post edited by @Ryan.Paredez for formatting
The latest version of the Splunk Add-on for AWS has changed the JSON for the "AWS Description" ingest; see the examples below. My question is about selecting values from this new 'type' of array. Before, you could select particular values with the following search syntax: tags.Name = "server1"

QUESTIONS
1. How do I make the same search with the newer JSON?
2. What is the technical description for these two different forms of arrays?

BEFORE
tags: {
     Environment: test
     Name: server1
}

AFTER
Tags: [
     {
       Key: Environment
       Value: test
     }
     {
       Key: Name
       Value: server1
     }
]
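On question 2: the first form is a JSON object (a map of tag name to value, auto-extracted as tags.Name), while the second is a JSON array of Key/Value objects — auto-extraction flattens it into the multivalue fields Tags{}.Key and Tags{}.Value, losing the pairing between key and value. One sketch for rebuilding the pairing before filtering:

```
| spath path=Tags{} output=tag
| mvexpand tag
| spath input=tag
| where Key="Name" AND Value="server1"
```

Here each Tags entry becomes its own row before being parsed, so the Key="Name" test is guaranteed to match the same object as Value="server1".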
EventGen v7.2.1 throws the following exception.
Python: 3.9.2
Docker image: nginx

eventgen 2022-04-06 15:08:42 eventgen ERROR MainProcess Unexpected character in found when decoding object value
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/splunk_eventgen/lib/plugins/output/httpevent_core.py", line 136, in updateConfig
    self.httpeventServers = json.loads(config.httpeventServers)
ValueError: Unexpected character in found when decoding object value
2022-04-06 15:08:42 eventgen ERROR MainProcess 'HTTPEventOutputPlugin' object has no attribute 'serverPool'
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/splunk_eventgen/lib/plugins/output/httpevent_core.py", line 250, in _sendHTTPEvents
    self._transmitEvents(stringpayload)
  File "/usr/local/lib/python3.9/dist-packages/splunk_eventgen/lib/plugins/output/httpevent_core.py", line 261, in _transmitEvents
    targetServer.append(random.choice(self.serverPool))
AttributeError: 'HTTPEventOutputPlugin' object has no attribute 'serverPool'
2022-04-06 15:08:42 eventgen ERROR MainProcess failed indexing events, reason: 'HTTPEventOutputPlugin' object has no attribute 'serverPool'

.conf file:
[cyclical.csv]
mode=sample
interval=60
count=1
outputMode=httpevent
httpeventServers = {"servers": [{"protocol": "https", "port": "8088", "key": "0617eea5-87a9-4d18-8ed4-6dc085ddbe2c"", "address": "172.19.15.140"}]}
index=main
sourcetype=eventgen
sampletype=csv
source=eventgen_cyclical

Running it with: python3 -m splunk_eventgen -v generate -s cyclical.csv eventgen.conf
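An observation on the json.loads ValueError above (not confirmed, but consistent with the traceback): the httpeventServers value contains a doubled quote after the key — "...dbe2c"" — which makes the JSON invalid, and the later serverPool AttributeError would follow from the failed config parse. A corrected line might look like:

```
httpeventServers = {"servers": [{"protocol": "https", "port": "8088", "key": "0617eea5-87a9-4d18-8ed4-6dc085ddbe2c", "address": "172.19.15.140"}]}
```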
Hi all, I have to send Splunk Cloud logs to S3 buckets after the 90-day log retention in Splunk, for audit purposes. Can someone point me to how to achieve this, and if there is any documentation for it, please let me know?

Thanks in advance!
Hello, we had an issue where a DB input fell behind in fetching events. We saw that a few days ago the "Input Jobs Median Duration over Time" chart on the "DB Connect Input Performance" dashboard went from 0 to over 200. Is there a search that can be run to obtain the median duration? I would love to create an alert for if this happens again.
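A rough sketch of an alertable search — the sourcetype and field names below (dbx_job_metrics, duration, input_name) are assumptions about what DB Connect logs to _internal and should be verified against your version, e.g. by browsing index=_internal source=*db_connect* first, or by inspecting the search behind the dashboard panel itself ("Open in Search"):

```
index=_internal sourcetype=dbx_job_metrics
| stats median(duration) as median_duration by input_name
| where median_duration > 200
```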
With little to no Splunk experience, I inherited a 7.2.3 Windows deployment (we're on a closed network and I'm not cleared to upgrade yet). I've been finding little things here and there. One of the bigger ones is that I'm ONLY getting _audit logs from the Splunk servers; I'm not getting any audit input from any workstations or other production servers. I've been dredging the boards for 3 days now and haven't found anything that seems along this line. I've checked %Splunk\var\log\audit.log on several hosts, and the hosts' audit logs are getting input, but they're not getting ingested. I've gone through the deployment app's inputs.conf and outputs.conf files and don't see any glaring indications. So, I'm asking for ideas on other things to check.
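A quick baseline sketch to confirm exactly which hosts are (and are not) landing audit events, and when each last reported — a useful starting point before digging further into the deployment app configs:

```
index=_audit
| stats count, latest(_time) as last_seen by host
| eval last_seen = strftime(last_seen, "%Y-%m-%d %H:%M:%S")
| sort - last_seen
```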