All Topics

In my case, an alert is not triggered when a particular log is generated. When I checked the internal logs, I found that the person who created the alert does not have permission for scheduled searches, and because of this I cannot see any results for its job runs. If I create a new alert under a user with full scheduled-search permissions, will that resolve the issue? In other words, does alert triggering depend directly on the scheduled-search permission?
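In case it helps others narrowing down the same symptom: the scheduler writes its own logs to the _internal index, which show whether the search ran and under which user. A sketch (the saved search name is a placeholder for your alert's name):

```
index=_internal sourcetype=scheduler savedsearch_name="Your Alert Name"
| stats count by status, user, app
```

A status of skipped, or no results at all, would point at a scheduling/permission problem rather than at the search logic.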
Hi there. Does anyone have a way to manipulate the time in an alert template? It's showing the event time in UTC, not in the customer's time zone. Our SaaS Controllers are in Europe, but the customer is in a different time zone, and according to AppDynamics Support we cannot update the time in the alert. This obviously causes confusion when the customer sees an event time that is out by a couple of hours. Surely there is a way to add/subtract hours in the template itself? Any help is appreciated. :)
https://github.com/splunk/botsv3 https://www.splunk.com/en_us/blog/security/botsv3-dataset-released.html I'm starting to work through this dataset a bit more. If anyone has a walkthrough or knows where to find one, please let me know. I've only done about 10 questions so far. The AWS-related ones are difficult: I can't figure out which indexes and sourcetypes those logs end up in.
Hi, I am trying to import data for a specific account from AWS S3. We have configured SQS to import the full data set from the same S3 bucket, and that works properly. I have defined the input as below. The account path in AWS is Amazon S3/amdocsinfosectrail/AWSLogs/o-kgohve3tjc/001519100451. The logs are not ingested with the key_name set; once I remove the filter, I can see that /opt/splunk/var/lib/splunk/modinputs/aws_s3/amdocsinfosectrail_001519100451.index.v3.ckpt is getting the list of files. What am I missing?

[aws_s3://amdocsinfosectrail_001519100451]
aws_account = IS account
bucket_name = amdocsinfosectrail
character_set = auto
ct_blacklist = ^$
host_name = s3.amazonaws.com
index = test
initial_scan_datetime = -180d
interval = 30
is_secure = True
max_items = 100000
max_retries = 3
recursion_depth = -1
sourcetype = aws:s3
disabled = 0
key_name = AWSLogs/o-kgohve3tjc/001519100451/*
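One thing worth sketching for readers with the same symptom: in the Splunk Add-on for AWS, the generic S3 input's key_name is (as an assumption to verify against the add-on documentation for your version) treated as a key prefix rather than a shell glob, so the trailing /* may prevent any key from matching. A minimal stanza under that assumption:

```
[aws_s3://amdocsinfosectrail_001519100451]
bucket_name = amdocsinfosectrail
# key_name as a plain prefix of the object key, no wildcard
key_name = AWSLogs/o-kgohve3tjc/001519100451/
sourcetype = aws:s3
index = test
disabled = 0
```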
Does anyone have any SPL for the TP-Link HS110? I've set up the TP-Link add-on from Splunkbase and I'm currently ingesting the data, but I'd also like to be able to graph it. Has anyone had any luck graphing the data? Thanks.
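A generic starting sketch for graphing any numeric metric from the add-on's events; the sourcetype and field name below are placeholders, not the add-on's actual names, so substitute whatever your ingested events show:

```
sourcetype="tplink:hs110"
| timechart span=5m avg(power) as avg_power max(power) as peak_power
```

Rendered as a line chart, this gives an average and peak power draw over time.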
Hi, we have very big indexes (300 GB) and very limited storage. Is it recommended to split an index into smaller indexes, for storage or performance reasons?
Hi, I'm trying to compute the duration between two rows. I need the duration for Battery_duration and Battery_duration2.

NodeBTime = Occurtime of Alarm ID 22214
PowerTime = Occurtime of Alarm ID 25622
CellTime = Occurtime of Alarm ID 29245
Battery_duration = NodeBTime - PowerTime
Battery_duration2 = CellTime - NodeBTime

Sample rows (AlarmID | Occurtime | ClearTime | NodeBTime | CellTime | PowerTime):
29245 | 3/07/2020 14:09 | 3/07/2020 14:13 | - | 3/07/2020 14:09 | -
25622 | 3/07/2020 9:01 | 3/07/2020 14:11 | - | - | 3/07/2020 9:01
22214 | 3/07/2020 13:59 | 3/07/2020 14:11 | 3/07/2020 13:59 | - | -

Here is my query:

| fillnull ClearTime
| eval ClearTime=if(ClearTime=0,strftime(now(),"%Y-%m-%d %H:%M:%S"),ClearTime)
| eval dur_sec=round(strptime(ClearTime,"%Y-%m-%d %H:%M:%S.%N")-strptime(Occurtime,"%Y-%m-%d %H:%M:%S.%N"))
| eval duration=tostring(dur_sec,"duration")
| convert num(duration)
| eval duration=round(duration/60,2)
| eval PowerTime=if(AlarmID="25622",Occurtime,null())
| eval NodeBTime=if(AlarmID="22214",Occurtime,null())
| eval CellTime=if(AlarmID="29245",Occurtime,null())
| eval Battery_duration=round(strptime(NodeBTime,"%Y-%m-%d %H:%M:%S.%N")-strptime(PowerTime,"%Y-%m-%d %H:%M:%S.%N"))
| table AlarmID Occurtime ClearTime duration NodeBTime CellTime PowerTime Battery_duration Battery_duration2 State

It doesn't give me any result for Battery_duration and Battery_duration2. What is missing? Thanks.
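One possible fix, sketched under the assumption that the sample timestamps are in day/month/year form: because eval runs per event, NodeBTime and PowerTime never exist on the same row, so subtracting them always yields null. eventstats can copy the values across all rows first:

```
| eval PowerTime=if(AlarmID="25622", Occurtime, null())
| eval NodeBTime=if(AlarmID="22214", Occurtime, null())
| eval CellTime=if(AlarmID="29245", Occurtime, null())
| eventstats values(PowerTime) as PowerTime values(NodeBTime) as NodeBTime values(CellTime) as CellTime
| eval Battery_duration=round(strptime(NodeBTime,"%d/%m/%Y %H:%M")-strptime(PowerTime,"%d/%m/%Y %H:%M"))
| eval Battery_duration2=round(strptime(CellTime,"%d/%m/%Y %H:%M")-strptime(NodeBTime,"%d/%m/%Y %H:%M"))
```

The strptime format string must match how Occurtime is actually stored; the "%d/%m/%Y %H:%M" above is inferred from the sample rows, not confirmed.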
I have two syslog servers, syslog1 and syslog2. For all of the sources I am getting data into both syslog servers, but indexing the data from one of them. For one particular source, however, I am receiving data only on syslog1 and not on syslog2, even though everything else is currently being forwarded from syslog2. I don't know how or where to start troubleshooting. Please help.
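As a first step, it may help to confirm what Splunk is actually receiving per host and source before looking at the syslog daemons themselves. A sketch (the index filter is a placeholder; narrow it to your syslog indexes):

```
| tstats count latest(_time) as last_seen where index=* by host, sourcetype
| eval last_seen=strftime(last_seen, "%Y-%m-%d %H:%M:%S")
```

If the missing source never appears under syslog2's host, the problem is upstream of Splunk (device send config or the syslog daemon's filters), not in the forwarder.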
Hi there, I want to group the filter values into Full Outage or Partial Outage. The filter field is multivalue:

filter = 3G Outage, Cell Blocked, Power Outage → impact should be Full Outage
filter = Power Outage, Cell Blocked → impact should be Partial Outage

Here is my query:

| eval impact=case(
    searchmatch("Cell Blocked"), "Partial Outage",
    searchmatch("3G Outage"), "Full Outage",
    1=1, "No service impact")

For the first row this returns Partial Outage, but the correct impact should be Full Outage. Can anyone help me out? Thanks.
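A sketch of one way to fix this: case() evaluates its conditions top-down and stops at the first match, so the "3G Outage" test has to come first; and testing the filter field directly (mvfind returns a non-null index when any value of a multivalue field matches the regex) avoids searchmatch scanning the whole raw event:

```
| eval impact=case(
    isnotnull(mvfind(filter, "3G Outage")), "Full Outage",
    isnotnull(mvfind(filter, "Cell Blocked")), "Partial Outage",
    1=1, "No service impact")
```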
Has anyone had luck gathering and ingesting Azure SQL audit logs from blob storage? I've seen articles on Azure AD, and on non-Azure SQL from a drive letter, but I'm looking for something specific on loading those pesky eml files from blob (or should I be doing something else to get the data into Splunk?).
Hi all, I want to embed a news feed video in my dashboard, but the examples I've tried to copy that have an embedded YouTube video don't work: no video shows. https://splunkonbigdata.com/2018/09/22/embedding-google-search-engine-in-splunk-dashboard/ Thanks in advance, Russell
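For reference, a minimal Simple XML sketch of an iframe embed; VIDEO_ID is a placeholder, and two assumptions apply: YouTube only allows framing via its /embed/ URLs (a plain watch?v= URL will refuse to load in an iframe), and some Splunk versions' content security settings may block external iframes entirely:

```
<dashboard>
  <label>Embedded video</label>
  <row>
    <panel>
      <html>
        <iframe width="560" height="315"
                src="https://www.youtube.com/embed/VIDEO_ID"
                frameborder="0" allowfullscreen="true"></iframe>
      </html>
    </panel>
  </row>
</dashboard>
```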
How can I upgrade my current Splunk universal forwarder? Please share all the steps: how to upgrade the version, how to take a backup, and so on, for both Linux and Windows.
Hi, a fresh install of Splunk Enterprise on my Ubuntu box:

root@sekar:/opt/splunk/var/log/splunk# uname -a
Linux sekar.splunk.com 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

The install went fine, and the service is running fine as well:

root@sekar:/opt/splunk/bin# ./splunk status
splunkd is running (PID: 5462).
splunk helpers are running (PIDs: 5463 5477 5550 5564).

Port 8000 is used by splunkd only:

root@sekar:/opt/splunk/bin# lsof -i :8000
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
splunkd 5462 root 112u IPv4 57851 0t0 TCP *:8000 (LISTEN)

Still, the Splunk web page is not loading. It says:

This site can't be reached. sekar.splunk.com took too long to respond. Try: checking the connection; checking the proxy and the firewall. ERR_CONNECTION_TIMED_OUT

Please suggest. Thanks.
If I create a free account and download the Universal Forwarder, is there no limit on the number of days I can use it? If so, do I need a Community, Standard, or Premium contract to get support? Will the agreement apply to other accounts?
Hello, I have tried the following command to forecast recipient using the predict command and the Forecast Time Series assistant:

sourcetype="mysource"
| timechart span=60min values(recipient{}) as recipient values(headerFrom) as headerFrom count(recipient{}) by span
| predict "recipient: NULL" as prediction algorithm=LLP holdback=0 future_timespan=5 upper95=upper95 lower95=lower95
| `forecastviz(5, 0, "recipient: NULL", 95)`

I gave "recipient: NULL" to predict because the columns I get from the timechart are:

_time    count(recipient{}): NULL    headerFrom: NULL    recipient: NULL

I also tried using the plain field name for predict:

sourcetype="mysource"
| timechart span=60min values(recipient{}) as recipient values(headerFrom) as headerFrom count(recipient{}) by span
| predict "recipient" as prediction algorithm=LLP holdback=0 future_timespan=5 upper95=upper95 lower95=lower95
| `forecastviz(5, 0, "recipient: NULL", 95)`

But then I am getting the error: command="predict", Unknown field: recipient. Please suggest.
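One sketch worth trying, under the assumption that the ": NULL" suffix comes from the trailing "by span" clause (span is not a field in the events, so timechart splits every series by a null group): dropping the by clause should leave plainly named columns that predict can address directly:

```
sourcetype="mysource"
| timechart span=60min values(recipient{}) as recipient
| predict recipient as prediction algorithm=LLP holdback=0 future_timespan=5 upper95=upper95 lower95=lower95
```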
Hi, my timestamp in the data looks like 2020-07-02T18:00:18+02:00, in a field named last_modified_date, which I want extracted as the event time. I have written the props.conf below:

[_json]
INDEXED_EXTRACTIONS = json
KV_MODE = none
AUTO_KV_JSON = false
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
TIMESTAMP_FIELDS = last_modified_date
TIME_FORMAT = %Y-%m-%dT%H:%M:%S+%2N:%2N
MAX_TIMESTAMP_LOOKAHEAD = 25

The time is getting extracted as 7/2/20 6:00:18.020 PM, but I want the time field extracted the same way as in the data, with the + offset included, like 7/2/20 6:00:18+02:00. Please let me know what I am doing wrong, as I am not getting the expected output with the + value. Note: this +02:00 offset is fixed for every timestamp in the data. Here's my sample log data:

{"_timestamp":"2020-07-02 18:00:46","_ver":"2","asset_name":"","assigned_group":"Troubleshooting - Tier 2","assignee":"Buhle Mahlaba","ci":"","cause":"","city":"","client_type":"","closed_date":"","closure_source":"","company":"MTN BUSINESS","contact_phone":"","contact_site":"","country":"","created_from_template":"","customer_phone":"###","customer_site":"INTERNET SOLUTIONS(PTY) LTD","debtor_code":"MTN000","direct_contact_city":"","direct_contact_company":"","direct_contact_corporate_id":"","direct_contact_country":"","direct_contact_country_code":"","direct_contact_department":"","direct_contact_desk_location":"","direct_contact_extension":"","direct_contact_first_name":"","direct_contact_internet_email":"","direct_contact_last_name":"","direct_contact_local_number":"","direct_contact_location_details":"","direct_contact_middle_initial":"","direct_contact_organization":"","direct_contact_region":"","direct_contact_site_group":"","direct_contact_state_province":"","direct_contact_street":"","direct_contact_time_zone":"","direct_contact_zip_postal_code":"","first_name":"Melvern","impact":"2-Significant\/Large","incident_id":"MTNB00001289400","incident_type":"User Service 
Restoration","last_acknowledged_date":"","last_modified_by":"412877","last_modified_date":"2020-07-02T18:00:44+02:00","last_name":"Banoo","last_resolved_date":"","middle_name":"","notes":"HI Team\n\nThe mentioned link is down ,Please investigate and advise.\n\n\nRP\/0\/RSP0\/CPU0:mi-za-bry-mspe4#sho log | inc BVI906\nRP\/0\/RSP0\/CPU0:Jul  2 14:43:49.894 SAST: mpls_ldp[1204]: %ROUTING-LDP-5-HELLO_ADJ_CHANGE : VRF 'default' (0x60000000), Link hello adja...","operational_categorization_tier_1":"TES_Link","operational_categorization_tier_2":"Microwave PTP","operational_categorization_tier_3":"Link Down","owner_group":"General Support","priority":"Critical","product_categorization_tier_1":"TES_Managed Networks","product_categorization_tier_2":"Access Service","product_categorization_tier_3":"Cloud Connect","product_name":"","region":"","reported_date":"2020-07-02T16:36:04+02:00","reported_source":"Email","resolution":"","resolution_categorization_tier_1":"","resolution_categorization_tier_2":"","resolution_categorization_tier_3":"","resolution_product_categorization_tier_1":"","resolution_product_categorization_tier_2":"","resolution_product_categorization_tier_3":"","responded_date":"2020-07-02T18:00:43+02:00","slm_real_time_status":"Within the Service Target","satisfaction_rating":"","service_manager":"","service_request_id":"","site_group":"","state_province":"","status":"In Progress","status_reason_hidden":"","street":"","submit_date":"2020-07-02T16:36:04+02:00","submitter":"AR_ESCALATOR","summary":"INC000147465| me-za-gp80-hoedspru-bry-1 | | E2379","time_zone":"","urgency":"1-Critical","vendor_group":"","vendor_name":"","vendor_ticket_number":"","zip_postal_code":""}
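For anyone hitting the same question, a sketch of a props.conf that parses the offset rather than matching the "+" literally; the %:z directive is an assumption to verify for your Splunk version (%z, accepting +0200-style offsets, is the longer-supported form). Note also that Splunk stores _time internally in UTC and renders it in each user's configured time zone, so the literal "+02:00" text will not reappear in the rendered time field regardless of TIME_FORMAT:

```
[_json]
INDEXED_EXTRACTIONS = json
TIMESTAMP_FIELDS = last_modified_date
TIME_FORMAT = %Y-%m-%dT%H:%M:%S%:z
MAX_TIMESTAMP_LOOKAHEAD = 25
```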
I am currently using Splunk Enterprise 8.0.3 and Phantom version 4.8.24304. All Phantom apps have been installed and are configured correctly. In Splunk Web, I have successfully configured the Phantom server in the app, and added the Splunk Enterprise instance IP under "allowed IPs" in Phantom. I have tried two ways of forwarding data into Phantom from Splunk: through event forwarding of saved searches and through the HTTP Event Collector (HEC). Splunk Web and Phantom are on two different VMs. I am not able to connect using the HEC in Phantom under the Search Settings option, and the saved search for event forwarding never appears in Phantom. For the HEC, I used the following URL: hxxp://splunk_host:8088/services/collector/event. Each time, whether using http or https, the request 404s. The saved search function allows me to choose "Send to Phantom", but again I am not seeing any events in Phantom. I have verified connectivity between the VMs, and there are no issues there. The problem lies somewhere with my HEC and the saved searches for forwarding. My VM is listening on port 8088 for HEC. Any help would be greatly appreciated.
I'm primarily interested in the transaction and access logs. I also wanted to add that the OAG instances are located within the Oracle Cloud (OCI). Splunk is also set up on OCI instances.
Initially, I was just planning to install the Palo Alto Networks Add-on for Splunk on an HF and get the traffic and threat logs sent to Splunk, but there also appears to be a lot of documentation on using a syslog server plus a UF to facilitate the flow of Palo Alto logs into Splunk. What is the recommended approach for sending Palo Alto logs to Splunk? I'm mainly interested in getting firewall (pan:traffic) and IDS/IPS (pan:threat) logs.
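For the syslog + UF route, a minimal monitor stanza sketch for the UF, assuming a syslog daemon is already writing the firewall logs to files; the path, index name, and host_segment are assumptions for this example, and the catch-all pan:log sourcetype (which the Palo Alto Networks add-on then retypes into pan:traffic, pan:threat, etc.) should be verified against the add-on's documentation for your version:

```
[monitor:///var/log/pan/*/*.log]
sourcetype = pan:log
index = pan_logs
# host_segment picks the firewall hostname out of the file path
host_segment = 4
```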
Hello, I have an odd problem with DB Connect: my connection is OK, and I can see the database and the tables, but when I try to query (a basic select) I get an error:

Error in 'dbxquery' command: External search command exited unexpectedly with non-zero error code 1. The search job has failed due to an error. You may be able to view the job in the Job Inspector.

My query is:

| dbxquery query="select * from aps.notifications" connection="PE"

Even "select 1 from dual" doesn't work.

Database: Oracle
User: root privileges
Splunk 7.0.0
DB Connect 3.1.4, App Build 43