All Topics



Hi All, I have created a custom event that gives me data about the top running SQLs. However, when I create an alert on it, the alert email only contains the header information, not the event details. Can you please help me understand how to get the event details into the email? Thanks.
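A common cause is that the email action sends only the notification without the inline results. Below is a minimal savedsearches.conf sketch of the relevant settings; the stanza name is hypothetical, not the poster's actual alert:

```ini
# savedsearches.conf sketch; "Top Running SQLs" is a hypothetical alert name
[Top Running SQLs]
action.email = 1
action.email.sendresults = 1    # include the search results, not just the header
action.email.inline = 1         # embed the results in the message body
action.email.format = table     # render results as a table (csv and raw also exist)
```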
Hi All, what are our options if we are not satisfied with the way a TA extracts fields from our raw data? We are seeing issues with how the AWS Add-on extracts the values for one of the log sources from AWS, and we are using the latest version of the TA. What can we do on our side to correct the field extractions? The AWS data comes in JSON format, and one of the fields is mangled.
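One general option, sketched below, is to override the add-on's search-time extraction with a local props.conf in a higher-precedence app. The sourcetype and field names here are placeholders, not the actual AWS Add-on values:

```ini
# etc/apps/my_fixups/local/props.conf (sketch)
# Hypothetical sourcetype; replace with the one whose extraction is wrong
[aws:cloudwatchlogs]
# Re-extract the broken field from the raw JSON at search time
EVAL-broken_field = spath(_raw, "detail.brokenField")
```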
Hello, I have one indexer cluster that receives data over inputs.conf [splunktcp://9997]. I want to clone all data received by this indexer cluster on this port to another Splunk instance, which also listens on 9997. I understand this will double my license consumption.

Current: UF --> Indexer (stores all data)
Desired: UF --> Indexer (stores all data) --> Other Indexer (also stores all data)

How can I clone all data received on 9997 from one indexer to another? Thanks
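One way to do this, assuming the first tier are full indexers, is to enable local indexing plus forwarding in outputs.conf on each indexer. The host name and port below are placeholders:

```ini
# outputs.conf on each indexer in the first cluster (sketch)
[indexAndForward]
index = true                             # keep indexing locally

[tcpout]
defaultGroup = clone_group

[tcpout:clone_group]
server = other-indexer.example.com:9997  # placeholder for the second instance
```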
I'm trying to split an event into multiple events. My raw event looks like this:

<14>1 2022-09-14T12:49:12.620+08:00 TestServer mft 3491 SFTP Audit Log [gamft-sftp@46583 mftcommand="Connect" mftend_time="2022-09-14 12:49:12 PM" mftevent_type="Connection Successful" mftremote_ip="192.168.168.168" mftstart_time="2022-09-14 12:49:12 PM"]<14>1 2022-09-14T12:49:12.620+08:00 TestServer mft 3491 SFTP Audit Log [gamft-sftp@46583 mftcommand="Connect" mftend_time="2022-09-14 12:49:12 PM" mftevent_type="Connection Successful" mftremote_ip="192.168.168.168" mftstart_time="2022-09-14 12:49:12 PM"]<14>1 2022-09-14T12:49:12.727+08:00 TestServer mft 3491 SFTP Audit Log [gamft-sftp@46583 mftcommand="Login" mftend_time="2022-09-14 12:49:12 PM" mftevent_type="Login Successful" mftremote_ip="192.168.168.168" mftstart_time="2022-09-14 12:49:12 PM" mftuser_name="testuser"]

I want to split it into three events, like this. How can I do it?

<14>1 2022-09-14T12:49:12.620+08:00 TestServer mft 3491 SFTP Audit Log [gamft-sftp@46583 mftcommand="Connect" mftend_time="2022-09-14 12:49:12 PM" mftevent_type="Connection Successful" mftremote_ip="192.168.168.168" mftstart_time="2022-09-14 12:49:12 PM"]
<14>1 2022-09-14T12:49:12.620+08:00 TestServer mft 3491 SFTP Audit Log [gamft-sftp@46583 mftcommand="Connect" mftend_time="2022-09-14 12:49:12 PM" mftevent_type="Connection Successful" mftremote_ip="192.168.168.168" mftstart_time="2022-09-14 12:49:12 PM"]
<14>1 2022-09-14T12:49:12.727+08:00 TestServer mft 3491 SFTP Audit Log [gamft-sftp@46583 mftcommand="Login" mftend_time="2022-09-14 12:49:12 PM" mftevent_type="Login Successful" mftremote_ip="192.168.168.168" mftstart_time="2022-09-14 12:49:12 PM" mftuser_name="testuser"]
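A props.conf sketch that breaks the stream before each syslog header at index time; the sourcetype name is an assumption:

```ini
# props.conf on the indexer or heavy forwarder (sketch)
[mft:sftp:audit]
SHOULD_LINEMERGE = false
# Zero-width break before every "<14>1 " syslog priority/version header
LINE_BREAKER = ()(?=<14>1\s)
TIME_PREFIX = <14>1\s
MAX_TIMESTAMP_LOOKAHEAD = 32
```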
Hello Splunkers, I am seeing some differences in setting up configurations (Configuration tab) in On-Prem vs. Splunk Cloud for the Website Monitoring application. The "Proxy Server" and "Proxy Server Authentication" configurations are both available in Splunk On-Prem but not in Splunk Cloud; only the "Advanced" configuration shows on both platforms. Is anyone seeing the same, or is this intended because the platform is Splunk Cloud? The context for this question is an app migration to Splunk Cloud and a comparison of the experience to On-Prem. I am wondering whether Proxy Server settings are no longer needed for the application in Splunk Cloud, and hence this difference. Thanks in advance. Kind regards, Ariel
Hi all, I am trying to extract the field value ABDEF-999 into the name id, but it is not extracting when I use the command below. Could someone point out the mistake in the following rex?

|rex field="line" "\"Testcode\":\"(?<id>[^\"]*)\""|table id

Extracting from:  \\\"Testcode\\\":\\\"ABDEF-999\\\"
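If the field value really contains literal backslash-escaped quotes, the rex has to match those backslashes explicitly. A sketch, assuming one literal backslash before each quote (the exact backslash count depends on how the data is stored, so this may need adjusting):

```
| rex field=line "Testcode\\\\\":\\\\\"(?<id>[^\\\\\"]+)"
| table id
```

After SPL string unescaping, the pattern becomes the regex Testcode\\":\\"(?<id>[^\\"]+), which matches a literal \" on each side of the value.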
Hello, is there a feature roadmap available? I'm loving the new Dashboard Studio for designing some of the projects I'm working on, but sadly I find it unusable due to the lack of several key features (export to PDF and the old drilldown options). Thanks!
Hello, I plan to upgrade Splunk from version 8.2 to 9.*, but we are using Splunk Universal Forwarder 7.1.0. Will Splunk Enterprise be compatible with Splunk Universal Forwarder 7.1.0, or do we have to upgrade the Universal Forwarder to version 9.* as well?
If I have a simple dashboard with a time range picker input, how can I add source code to convert the picker selection into StartDate and EndDate tokens? StartDate = strftime(earliest, "%m/%d/%Y %H:%M:%S"), EndDate = strftime(latest, "%m/%d/%Y %H:%M:%S")

{
  "visualizations": {
    "viz_ZgRiQCoQ": {
      "type": "viz.column",
      "options": {},
      "dataSources": { "primary": "ds_GHdtwfg5" }
    }
  },
  "dataSources": {
    "ds_GHdtwfg5": {
      "type": "ds.search",
      "options": { "query": "index=_internal \n| top 100 sourcetype" },
      "name": "Search_1"
    }
  },
  "defaults": {
    "dataSources": {
      "global": {
        "options": {
          "queryParameters": {
            "latest": "$global_time.latest$",
            "earliest": "$global_time.earliest$"
          }
        }
      }
    },
    "visualizations": {
      "global": { "showLastUpdated": true }
    }
  },
  "inputs": {
    "input_global_trp": {
      "type": "input.timerange",
      "options": { "token": "global_time", "defaultValue": "-24h@h,now" },
      "title": "Global Time Range"
    }
  },
  "layout": {
    "type": "absolute",
    "options": {},
    "structure": [
      {
        "item": "viz_ZgRiQCoQ",
        "type": "block",
        "position": { "x": 0, "y": 0, "w": 300, "h": 300 }
      }
    ],
    "globalInputs": [ "input_global_trp" ]
  },
  "title": "Global time range picker",
  "description": ""
}

And then be able to display StartDate and EndDate at the top of the dashboard, so that if the user selects Last 24 hours or Last 30 days, it is displayed as 09/12/2022 - 09/13/2022 or 08/14/2022 - 09/13/2022?
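One way to compute those strings in SPL itself is with addinfo, which exposes the effective time range of a search as info_min_time and info_max_time. A sketch of a query that could back a separate display panel (binding the results to dashboard tokens is a separate Dashboard Studio step):

```
| makeresults
| addinfo
| eval StartDate = strftime(info_min_time, "%m/%d/%Y %H:%M:%S"),
       EndDate   = strftime(info_max_time, "%m/%d/%Y %H:%M:%S")
| table StartDate EndDate
```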
Splunk HEC and iOS/HomeKit Shortcuts

A number of years ago, the PM for HEC happened to sit behind me at a .conf keynote. Glenn leaned forward and said, "you're going to love this." He was right; I fell in love with HEC right away. A few months later I was giving him grief about where the HEC example code for Python was, because the Raspberry Pi universal forwarder was not getting love at the time. He replied that it's just JSON and POST, just write it. So I did, and made a HEC Python class a number of folks still use (GitHub - georgestarcher/Splunk-Class-httpevent: Python class to submit events to Splunk HTTP Event Collector).

Recently, I was playing with a lot of iOS Shortcuts (https://support.apple.com/guide/shortcuts/welcome/ios), automating things on my phone and in my home. I wondered: what if I posted JSON to the SplunkTrust (https://www.splunk.com/en_us/community/splunk-trust.html) Splunk Cloud instance? Could I do it easily and natively within Shortcuts? The short answer is YES!

You need to remember that HEC was made by devs for devs, so you only need to decide on a good JSON (Dictionary) payload that meets the HEC events endpoint formatting. We don't bother with the raw endpoint because the Dictionary object is a native Shortcuts thing. You will need a valid HEC receiver set up, which is beyond the scope of this post. The HEC receiver has to be reachable from the Internet, such as Splunk Cloud. You will also need a valid HEC token and know the index; here we just use main. You will have to look at the attached screenshots; I am not typing out every tap and step here, since Shortcuts are visually self-explanatory.

iOS Shortcuts:

Shortcuts have more power on iOS than on HomeKit, so first we will cover the easy way on iOS. First, you will want to make a new shortcut to act as your HEC sender. This way you can set it up once but run it from other shortcuts that have a well-formed JSON event to send. Think Python class/code reuse. We receive text from input to the shortcut.
This is what we receive when this shortcut is called by "Run Shortcut". We store that in a variable, "HEC Payload". We next store the full URL to the HEC events endpoint and the HEC token in variables. The final trick is doing the POST of the payload to the HEC receiver using the "Get Contents of URL" action. Note in the attached screenshot that we change the method to POST, set the header, and use type File for the JSON payload.

Next, let's set up a shortcut that sends the data we want. Here we make one to get the device name and other device information, and log the battery level at the time. The key is making the Dictionary object for the HEC event payload. Here is a drill-down of that section. Last, we automate the running of the data shortcut whenever we plug our device into power. To show it works like a champ:

HomeKit:

Now let's say you want to log an event from a light coming on. HomeKit can execute some limited shortcut actions. These get executed on whatever your HomeKit hub turns out to be, such as an Apple TV 4K or HomePod, hence the limitation. The limitation for us is that there is no Run Shortcut action. This means you have to make the JSON payload (Dictionary) object and the HTTP action together in each automation; there is no setting up the HEC sender once and calling it as needed. In this example we simply log when my mantle Hue bulb comes on. This could be anything HomeKit can trigger off of, such as a button press, motion, temperature, etc. I won't expand on it all, as they work the same way as our previous example; this just shows you have to build the payload and the POST action inside each HomeKit automation action.

What is next? Well, you can automate an HEC POST of any data that an iOS or HomeKit shortcut can see. Use your imagination for data that is of value to you.
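For readers who prefer code to screenshots, here is a rough Python equivalent of what the Shortcuts POST action does, using only the standard library. The URL, token, and payload fields are placeholders, not a real endpoint:

```python
import json
import urllib.request

# Placeholders: substitute your real HEC endpoint and token
HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"

# A HEC "events" endpoint payload: the event body plus routing metadata
payload = {
    "event": {"device_name": "my-iphone", "battery_level": 81},
    "sourcetype": "shortcuts:json",
    "index": "main",
}

req = urllib.request.Request(
    HEC_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Splunk {HEC_TOKEN}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req) would perform the send; it is left out here
# because the endpoint above is a placeholder.
```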
I am working with ES, and the dvc_city field, which is derived from a lookup table file, is not populating. We have checked the file, ensured the .csv format is correct, and removed and re-added the fields for that particular data set. We added the data via the Lookup Editor. While troubleshooting, we received errors when we ran the following search:

index=_internal (sourcetype=lookup_editor_rest_handler OR sourcetype=lookup_backups_rest_handler) INFO OR WARNING OR ERROR OR CRITICAL | rex field=_raw "(?<severity>(DEBUG)|(ERROR)|(WARNING)|(INFO)|(CRITICAL)) (?<message>.*)" | fillnull severity value="UNDEFINED" | search severity=ERROR

ERROR Unable to force replication of the lookup file, user=<user's_name>, namespace=SplunkEnterpriseSecuritySuite, lookup_file=lookup_file.csv
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/lookup_editor/bin/lookup_editor/__init__.py", line 415, in update
    self.force_lookup_replication(namespace, lookup_file, session_key)
  File "/opt/splunk/etc/apps/lookup_editor/bin/lookup_editor/__init__.py", line 292, in force_lookup_replication
    if 'No local ConfRepo registered' in content:
TypeError: a bytes-like object is required, not 'str'

Please note the following:
1. We periodically add data to this lookup file, and this is the first time receiving this error.
2. We are on the Splunk Cloud Platform.
3. As a result, we are not receiving any enrichments for any new data added to that particular lookup file. Previous data is populating as normal, with the dvc fields as expected.
4. The asset lookup was added in ES, and the new lookup data is shown in the exported file.
5. An inputlookup search returns the newly added data, including the "city" field that maps to dvc_city.
6. The global setting is configured for the correct city/IP mapping in ES.

Let me know if any other information is required.
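The traceback itself points at a classic Python 3 pitfall: testing a str against a bytes object. A minimal illustration of the failure and the usual fix (this is not the lookup_editor source, just the pattern):

```python
content = b"No local ConfRepo registered"  # e.g. a raw HTTP response body

# This is what the failing line does, and it raises:
# TypeError: a bytes-like object is required, not 'str'
try:
    found = "No local ConfRepo registered" in content
except TypeError:
    found = None

# The usual fix: decode the bytes before comparing against a str
fixed = "No local ConfRepo registered" in content.decode("utf-8")
```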
I am on Splunk Cloud and have been using this functionality, which is pretty useful for determining what timezone our users are in. It just seems to have stopped since last Tuesday; we just got our environment upgraded to version 8.2.2203.4. It returns the Timezone and MetroCode fields, but with no data. Any ideas? (where x.x.x.x = IP address)

| makeresults 1
| eval src_ip = "x.x.x.x"
| iplocation src_ip allfields=true
| transpose

gives:

column      row 1
City        Houston
Continent   North America
Country     United States
MetroCode
Region      Texas
Timezone
_time       1663100176
lat         29.7604
lon         -95.3698
src_ip      x.x.x.x

I've raised a case, but I'm interested in whether anyone else has experienced this.
Hello, we have Splunk at my new company, and I am trying to understand Splunk and the environment. They have firewall logs (from one product) in 3 different indexes: one for traffic, one for threats, and one for other firewall logs. Is this normal? It seems a bit inefficient, especially with regard to the organization of logs and searching. They have also combined 2 different firewall products into one of the indexes; I thought each product should have its own index? The person who did the deployment said this was done for efficiency, but it somehow seems counterproductive. Am I missing something when I have to search 3 different indexes to get complete results for a certain IP? Any advice is appreciated. Thank you, CM
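For what it's worth, a single search can cover all three indexes at once; a sketch with hypothetical index names and IP:

```
(index=fw_traffic OR index=fw_threat OR index=fw_other) src_ip="10.1.2.3"
```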
Hello everyone, I was curious if someone could help me find an app for Splunk that will collect syslog from my Cisco network gear (e.g., if someone changes a VLAN or shuts a port). I could look up a user name or switch name, and it would show me the timestamp, the command that was run, and who ran it. I used this at a prior employer and want to get it implemented where I am employed now. It was good for outages when no one spoke up, and for accountability and training.
Hello team!! I'm working with CDRs of SMS, and I have to find a way to see when two fields repeat more than 10 times in a minute. Could you help me find a way to do it? This is a part of my CDR:

14:00:06.495844|2022-09-13 14:00:06.495847|2022-09-13 14:00:06|MT|3385251555|56271948588

origin: 3385251555
dest: 56271948588

I want to see when the same origin and the same destination repeat more than 10 times in 1 minute. Thank you very much for your help and time.
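One way to express this in SPL: bucket events into one-minute spans and count by the origin/destination pair. The index name and field names below are assumptions based on the sample:

```
index=sms_cdr
| bin _time span=1m
| stats count AS attempts BY _time origin dest
| where attempts > 10
```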
Has anyone set up CIM parsing for the Akamai SIEM TA? I assume these events should be going to the Alerts data model, but there is almost no parsing in the app. https://splunkbase.splunk.com/app/4310/
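In case it helps, CIM data-model membership is driven by eventtypes and tags, so one self-serve route is to add them locally. The eventtype name and sourcetype below are assumptions:

```ini
# eventtypes.conf (sketch)
[akamai_siem_events]
search = sourcetype=akamaisiem

# tags.conf (sketch): the Alerts data model selects events tagged "alert"
[eventtype=akamai_siem_events]
alert = enabled
```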
I'm looking to set up a multisite indexer cluster, and due to GDPR, I'd like to have a non-replicated index on one of the sites. Is this possible?
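Replication is controlled per index, so one approach is an index with repFactor = 0, which keeps buckets only on the peers that indexed them. A sketch (the index name and paths are placeholders):

```ini
# indexes.conf on the peers of the site that must retain the data (sketch)
[gdpr_local]
homePath   = $SPLUNK_DB/gdpr_local/db
coldPath   = $SPLUNK_DB/gdpr_local/colddb
thawedPath = $SPLUNK_DB/gdpr_local/thaweddb
repFactor  = 0        # do not replicate this index across the cluster
```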
Hey all, I found a question here about using multiple inputs.conf files, and how it's possible with multiple apps but not with just one. My question is: would this work if you copied the single app you have to a new directory? Say you have App 1 and you copy it to App 2. Since the contents are the same, as long as you did not try to monitor the same log files, would this work and be considered two apps?
Invalid key in stanza [webhook] in /opt/splunkforwarder/etc/system/default/alert_actions.conf, line 229: enable_allowlist (value: false). How can I resolve this error? Please help me.
Hi MC team, one of our current requirements for a Security Incident Management solution is the ability to provide quick context around an asset. One of the most time-consuming tasks an incident responder faces is tracking down what the device being alerted on does, what its criticality is, and who its owner is. The most effective way to do this is to integrate with an Asset Management/CMDB solution. Is this something that Mission Control can do, or is looking to do? Thank you kindly, Mike