All Topics

Hi, I am trying to connect to the SEP API via Python and my code is as follows:

# encoding = utf-8
import os
import sys
import time
import datetime
import json
import requests
import base64

'''
IMPORTANT
Edit only the validate_input and collect_events functions. Do not edit any other part in this file.
This file is generated only once when creating the modular input.
'''

'''
# For advanced users, if you want to create single instance mod input, uncomment this method.
def use_single_instance_mode():
    return True
'''

def validate_input(helper, definition):
    """Implement your own validation logic to validate the input stanza configurations"""
    # This example accesses the modular input variable
    # text = definition.parameters.get('text', None)
    # text_1 = definition.parameters.get('text_1', None)
    pass

def collect_events(helper, ew):
    opt_clientid = helper.get_arg('clientid')
    opt_clientsecret = helper.get_arg('clientsecret')
    opt_customerid = helper.get_arg('customerid')
    opt_domainid = helper.get_arg('domainid')
    opt_apihost = helper.get_arg('apihost')

    tokenUrl = "https://" + opt_apihost + "/v1/oauth2/tokens"
    post = []
    files = []
    s = requests.Session()
    e = (opt_clientid + ':' + opt_clientsecret)
    en = e.encode('utf-8')
    en64 = base64.urlsafe_b64encode(en)
    s.headers.update({'Accept': 'application/json'})
    s.headers.update({'Authorization': 'Basic ' + str(en64.decode())})
    s.headers.update({'Content-Type': 'application/x-www-form-urlencoded'})
    s.headers.update({'Host': opt_apihost})
    f = s.post(tokenUrl, data=post, files=files, verify=False)
    r = json.loads(f.text)
    access_token = r['access_token']

    url = "https://" + opt_apihost + "/v1/devices"
    parameters = {"authorization": access_token}
    final_result = []

    # The following sends a REST request to the endpoint.
    response = helper.send_http_request(url, 'GET', parameters=None, payload=None,
                                        headers=parameters, cookies=None, verify=True,
                                        cert=None, timeout=None, use_proxy=True)
    # get the response headers
    # r_headers = response.headers
    # get the response body as text
    # r_text = response.text
    # get the response body as json; if the body text is not a JSON string, a ValueError is raised
    r_json = response.json()

    for devices in r_json["devices"]:
        state = helper.get_check_point(str(devices["id"]))
        if state is None:
            final_result.append(devices)
            helper.save_check_point(str(devices["id"]), "Indexed")

    event = helper.new_event(json.dumps(final_result), time=None, host=None, index=None,
                             source=None, sourcetype=None, done=True, unbroken=False)
    ew.write_event(event)

The code works when tested, but:
1. The events are being indexed, yet each value does not appear as a separate entity; everything is grouped together.
2. The data is not being updated at the interval mentioned at the beginning.
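A likely cause of issue 1 is that json.dumps(final_result) serializes the whole device list into one payload, so Splunk indexes a single grouped event. A minimal sketch (plain Python, outside the helper framework, with a hypothetical payload standing in for the /v1/devices response) of serializing one payload per device instead:

```python
import json

# hypothetical response, standing in for r_json from the /v1/devices call
r_json = {"devices": [{"id": 1, "name": "host-a"}, {"id": 2, "name": "host-b"}]}

# current approach: the whole list becomes one payload -> one grouped event
combined = json.dumps(r_json["devices"])

# alternative: one JSON payload per device -> one event per device
per_device = [json.dumps(d) for d in r_json["devices"]]

print(len(per_device))  # one payload per device
```

In collect_events this would mean calling helper.new_event(...) and ew.write_event(...) inside the for loop, once per device, rather than once for the accumulated final_result list.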
We want to integrate IBM X-Force's free open-source threat feed with Splunk. How can I achieve this? I have IBM's API ID and key. Any steps or documentation would be appreciated.
Hello community,

Trying to figure out what is blocking/affecting a UF on Windows. The agent was installed using the CLI:

msiexec.exe /i splunkforwarder-<version>-x64-release.msi DEPLOYMENT_SERVER="<ip>:8089" SPLUNKPASSWORD=<password> AGREETOLICENSE=yes /quiet

The agent is installed, connects to the deployment server, and fetches apps/configuration. Looking at the log, it seems the configuration is read properly, as I see the configuration in there: blacklist/whitelist and other things. The setup is UF -> HF -> IX; the IX cannot be reached directly.

Everything looks good, but here's where the issues start. Trying to execute Splunk commands, including runtime commands, does not work while the service/agent is running: I get a blinking prompt and nothing happens. After shutting down the service/agent I can run commands, though then the runtime commands do not work, so I can't diagnose. This is one thing which seems off.

Then, the system can reach both the DS and the HF; traffic is allowed. However, watching the traffic from/to the system, I see regular traffic with the DS but nothing toward the HF. Not even attempts to establish connections. This does not seem reasonable either; I would expect at least failed connections.

We suspect this is caused by managed configuration of the computer itself in some manner. Any suggestions regarding possible ways to diagnose/solve this would be much appreciated.
I am calculating the percentage for each HTTP status code, but I would also like to display the total number of requests in the results. The query below displays only the percentage of each status code. Is there a way I can add _Total to the results?

Query:

index=app_ops_prod host="sl55caehepc20*" OR host="sl55sdsapp001" OR host="sl55sdsapp002" source="/var/mware/logs/SDS*/*localhost*" method="POST"
| timechart span=1d count by status
| addtotals row=true fieldname=_Total
| foreach * [eval <<FIELD>> = '<<FIELD>>' * 100 / _Total]
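One thing worth checking: field names that start with an underscore are hidden in results by default, and the `foreach *` wildcard generally skips them, so _Total may survive the percentage rewrite yet never be displayed. A hedged sketch (untested, reusing the original search) that copies it into a visible field after the percentages are computed:

```
index=app_ops_prod host="sl55caehepc20*" OR host="sl55sdsapp001" OR host="sl55sdsapp002" source="/var/mware/logs/SDS*/*localhost*" method="POST"
| timechart span=1d count by status
| addtotals row=true fieldname=_Total
| foreach * [eval <<FIELD>> = '<<FIELD>>' * 100 / _Total]
| eval Total_Requests=_Total
```

Because Total_Requests does not start with an underscore, it shows up in the statistics table alongside the per-status percentages.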
Hi All,

Does the Splunk Security Essentials app also map our custom (user-defined) correlation searches to the different MITRE tactics? I know it does so for the pre-defined ones, but what about user-defined searches? Thanks
I'm developing a new app with React and the SplunkJS SDK. In my app, I want to call a custom endpoint service that lives in the same app, with the URL:

https://x.x.x.x:8089/servicesNS/-/<my-app>/run

In my code, I'm trying to call the custom endpoint:

handleSplunkConnect(event) {
    let http = new splunk_js_sdk.SplunkWebHttp();
    return new splunk_js_sdk.Service(
        http,
        application_name_space,
    );
}

makeCloudFlareAction(event) {
    let service = this.handleSplunkConnect() // make splunk service connection
    let params = {
        "customer": "JAUC-011",
    }
    service.post("/servicesNS/-/<my-app>/run", params, function(err, resp) {
        console.log(resp)
        console.log(err)
    });
}

But I see the request URL is `https://x.x.x.x:8000/en-US/splunkd/__raw/servicesNS/-/<my-app>/run?output_mode=json` instead of `https://x.x.x.x:8089/servicesNS/-/<my-app>/run?output_mode=json`.

My question is: how can I call a custom endpoint service on splunkd:8089 using the SplunkJS SDK? Or, can I call a custom endpoint service via port 8000 (web) instead of 8089 (splunkd)?
Hi All, I have logs in Splunk like below (this is one log):

{
  "connector": { "state": "RUNNING", "worker_id": "mwgcb-csrla02u.nam.nsroot.net:8084" },
  "name": "source.mq.apac.tw.ebs.ft.ft.raw.int.rawevent",
  "tasks": [ { "id": 0, "state": "RUNNING", "worker_id": "mwgcb-csrla02u.nam.nsroot.net:8084" } ],
  "type": "source"
}
{
  "connector": { "state": "RUNNING", "worker_id": "mwgcb-csrla01u.nam.nsroot.net:8084" },
  "name": "source.mq.apac.tw.cards.ecms.ecms.raw.int.rawevent",
  "tasks": [ { "id": 0, "state": "RUNNING", "worker_id": "mwgcb-csrla02u.nam.nsroot.net:8084" } ],
  "type": "source"
}
{
  "connector": { "state": "RUNNING", "worker_id": "mwgcb-csrla01u.nam.nsroot.net:8084" },
  "name": "sink.mq.apac.tw.cards.ecms.ecms.derived.int.sinkevents",
  "tasks": [
    { "id": 0, "state": "RUNNING", "worker_id": "mwgcb-csrla01u.nam.nsroot.net:8084" },
    { "id": 1, "state": "RUNNING", "worker_id": "mwgcb-csrla01u.nam.nsroot.net:8084" }
  ],
  "type": "sink"
}

I have created the below query to extract the fields and create a table of those values:
.....
| rex field=_raw max_match=0 "\"connector\"\:\s\{\s+\"state\"\:\s\"(?P<Connector_State>[^\"]+)\""
| rex field=_raw max_match=0 "\"connector\"\:\s\{\s+\"state\"\:\s\"\w+\"\,\s+\"\w+\"\:\s\"(?P<Worker_ID>[^\:]+)"
| rex field=_raw max_match=0 "\"connector\"\:\s\{\s+\"state\"\:\s\"\w+\"\,\s+\"\w+\"\:\s\"[^\:]+\:(?P<Port>\d+)\""
| rex field=_raw max_match=0 "\"connector\"\:\s\{\s+\"state\"\:\s\"\w+\"\,\s+\"\w+\"\:\s\"[^\"]+\"\s+\}\,\s+\"name\"\:\s\"(?P<Connector_Name>[^\"]+)\""
| search Connector_State=RUNNING
| table Connector_Name, Worker_ID, Port

It gives me the table in the below format, where several connectors are grouped into a single row as multivalue cells:

Connector_Name | Worker_ID | Port
source.mq.apac.tw.cards.ecs.ecs.raw.sit.rawevent, sink.mq.apac.tw.cards.ecs.ecs.raw.sit.rawevent, sink.mq.apac.hk.ebs.im.im.derived.int.sinkevents | gtgcb-csrla02s.nam.nsroot.net, gtgcb-csrla01s.nam.nsroot.net, gtgcb-csrla02s.nam.nsroot.net | 8087, 8087, 8087
sink.mq.apac.hk.ebs.im.im.derived.int.sinkevents | gtgcb-csrla02s.nam.nsroot.net | 8087
source.mq.apac.tw.cards.ecs.ecs.raw.sit.rawevent, sink.mq.apac.tw.cards.ecs.ecs.raw.sit.rawevent | gtgcb-csrla02s.nam.nsroot.net, gtgcb-csrla01s.nam.nsroot.net | 8087, 8087

But the requirement is to get the table as below, with one row per connector:

Connector_Name | Worker_ID | Port
source.mq.apac.tw.cards.ecs.ecs.raw.sit.rawevent | gtgcb-csrla02s.nam.nsroot.net | 8087
sink.mq.apac.tw.cards.ecs.ecs.raw.sit.rawevent | gtgcb-csrla01s.nam.nsroot.net | 8087
sink.mq.apac.hk.ebs.im.im.derived.int.sinkevents | gtgcb-csrla02s.nam.nsroot.net | 8087
sink.mq.apac.hk.ebs.im.im.derived.int.sinkevents | gtgcb-csrla02s.nam.nsroot.net | 8087
source.mq.apac.tw.cards.ecs.ecs.raw.sit.rawevent | gtgcb-csrla02s.nam.nsroot.net | 8087
sink.mq.apac.tw.cards.ecs.ecs.raw.sit.rawevent | gtgcb-csrla01s.nam.nsroot.net | 8087

Please help me modify the query to get the output in the desired format.
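When one _raw contains several connector blocks, max_match=0 produces parallel multivalue fields, which is why rows come out grouped. A common pattern (a sketch, untested against your data) is to zip the parallel fields together with mvzip, expand with mvexpand, and split each row back into scalar fields:

```
| eval zipped=mvzip(mvzip(Connector_Name, Worker_ID, "##"), Port, "##")
| mvexpand zipped
| eval parts=split(zipped, "##")
| eval Connector_Name=mvindex(parts, 0), Worker_ID=mvindex(parts, 1), Port=mvindex(parts, 2)
| table Connector_Name, Worker_ID, Port
```

One caveat: this relies on the three multivalue fields staying positionally aligned, so any per-value filtering (such as on Connector_State) is safer after the mvexpand rather than before.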
Hi, may I know where I can get the latest ITSI Administration Manual documentation/PDF? Please assist.
I'm trying to extract and dashboard the latest number in my logs for each "7d" stat. Some sample logs:

[db]: 00:05:01.000: 3ddesigns:total 173304125
[db]: 00:05:01.000: 3ddesigns:1d 253113
[db]: 00:05:01.000: 3ddesigns:7d 1435675
[db]: 00:05:01.000: 3ddesigns:30d 5863610
[db]: 00:05:01.000: 3dlessons:total 92148058
[db]: 00:05:01.000: 3dlessons:1d 103077
[db]: 00:05:01.000: 3dlessons:7d 539695
[db]: 00:05:01.000: 3dlessons:30d 2216809
[db]: 00:05:01.000: circuitsdesigns:total 62150103
[db]: 00:05:01.000: circuitsdesigns:1d 125770
[db]: 00:05:01.000: circuitsdesigns:7d 724227
[db]: 00:05:01.000: circuitsdesigns:30d 2936667

I have a search query, but it gives me a NULL field. Is there a way to rename the fields?

obs_mnkr="tnkrcad-p-ue1" source="/disk/logtxt/stats.log"
| multikv noheader=t
| fields _raw
| rex "3ddesigns:(?<designs>\w+)\s+(?<num>\d+)"
| regex designs!="1d"
| regex designs!="30d"
| regex designs!="total"
| rex "circuitsdesigns:(?<circuits>\w+)\s+(?<num>\d+)"
| regex circuits!="1d"
| regex circuits!="30d"
| regex circuits!="total"
| timechart span=1w last(num) by designs
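The NULL series usually appears because `timechart ... by designs` buckets every event where `designs` did not extract (the 3dlessons and circuitsdesigns lines) into NULL. Since the goal is only the "7d" stats, one rex that captures the app name itself avoids per-product patterns entirely; a sketch (untested, assuming every line follows the `name:stat value` shape in the samples):

```
obs_mnkr="tnkrcad-p-ue1" source="/disk/logtxt/stats.log"
| rex "(?<app>\w+):7d\s+(?<num>\d+)"
| where isnotnull(app)
| timechart span=1w last(num) by app
```

Anchoring the pattern on `:7d` means the 1d/30d/total lines never match, so no negative `regex` filters are needed, and each app becomes its own series automatically.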
Hello, I have three fields from which I need to build a line chart over a time series:

ServerTime
Endpoint
ResponseTime

I need to show each endpoint's 95th-percentile response time over ServerTime: the response time on the Y-axis, the time series on the X-axis, and a legend that shows the endpoints. Can you please suggest a query that would achieve this?

Thank you
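Assuming ResponseTime is numeric and _time reflects ServerTime (if not, an `eval _time=strptime(ServerTime, "<format>")` would be needed first), a timechart with the perc95 aggregation split by Endpoint produces exactly that layout; a sketch with a hypothetical base search and span:

```
index=<your_index> sourcetype=<your_sourcetype>
| timechart span=1h perc95(ResponseTime) by Endpoint
```

Rendered as a line chart, this yields time on the X-axis, the 95th-percentile response time on the Y-axis, and one legend entry per endpoint; adjust `span` to the granularity you want.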
Hello dears, I deleted a custom field which I created earlier, but it is still extracted in search results. Also, I'm trying a new field extraction (the sampling looks fine), but it doesn't show up in search (verbose mode). Do you have any idea why? Regards.
While configuring an S3 input in the Splunk Add-on for AWS, I received an error message stating that "SSL Validation failed" because the VPC S3 Endpoint did not match a series of S3 bucket endpoint names (e.g. s3.us-east-1.amazonaws.com). As part of the Splunk AWS Add-on naming convention for private endpoints, the Private Endpoint URL for the S3 bucket must be https://vpce-<endpoint_id>-<unique_id>.s3.<region>.vpce.amazonaws.com After creating the endpoints, we're running into the SSL Validation errors. Any idea what could be causing this?
I have a query that frequently times out due to the subsearch time limit. I'd like to improve its performance, but I'm not sure how. Here's my query:

host=prod* source=user-activity.log sourcetype=log4j ID=uniqueID MESSAGE="LOGIN_SUCCESS*"
| stats count as Logins by Full_Date, ID, DName, STATE
| join type=left ID
    [ search host=prod* source=server.log sourcetype=log4j MESSAGE="[Dashboard User-Facing*" ID=uniqueID
      | stats count as Errors by Full_Date, ID, DName, STATE ]
| eval %=round((100*Errors)/Logins,0)
| table ID, DName, Full_Date, STATE, Errors, Logins, %

Any help would be greatly appreciated.
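A common way to avoid the subsearch limit entirely is to pull both event sets in one base search and separate them with conditional stats, so there is no join or subsearch at all. A sketch (untested; it assumes Full_Date, DName, and STATE are extracted on both sourcetypes, which the original join implied):

```
host=prod* sourcetype=log4j ID=uniqueID ((source=user-activity.log MESSAGE="LOGIN_SUCCESS*") OR (source=server.log MESSAGE="[Dashboard User-Facing*"))
| eval kind=if(like(MESSAGE, "LOGIN_SUCCESS%"), "login", "error")
| stats count(eval(kind="login")) as Logins, count(eval(kind="error")) as Errors by Full_Date, ID, DName, STATE
| eval Pct=round((100*Errors)/Logins, 0)
| table ID, DName, Full_Date, STATE, Errors, Logins, Pct
```

Because everything happens in a single pass over the index, there is no subsearch row or time limit to hit, and the stats grouping replaces the join keys.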
The below search is for an alert; it is supposed to list all missing/non-reporting agents, but when I run it, it lists all hosts. Can anyone help fix the search below? Greatly appreciated.

index=indexname sourcetype="sourcetypename"
| bin _time span=4d
| eval days_since = floor((now()-lastSeen)/86400)
| stats latest(lastSeen) as lastSeen, values(days_since) as days_since by host
| search days_since>4
| eval lastSeen=strftime(lastSeen, "%Y-%m-%d %H:%M:%S")
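One structural issue: a search over recent events can only return hosts that did report in the window, and unless `lastSeen` is an actual extracted field, `days_since` is null, so the filter does nothing. A common alternative (a sketch, untested, with a hypothetical 30-day lookback) derives the last-seen time from the events themselves via tstats over a window longer than the silence threshold:

```
| tstats latest(_time) as lastSeen where index=indexname sourcetype="sourcetypename" earliest=-30d by host
| eval days_since=floor((now()-lastSeen)/86400)
| where days_since>4
| eval lastSeen=strftime(lastSeen, "%Y-%m-%d %H:%M:%S")
```

Hosts silent longer than the lookback window disappear entirely; comparing against a maintained lookup of expected hosts would catch those as well.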
I don't know the best way to word the subject, so if anyone has a better recommendation after reading my question below, let me know.

I have access to several different security dashboards from the InfoSec app. I am trying to figure out how to pivot from the summarized data shown in the dashboard, which uses tstats, to the traditional Search & Reporting app where I can view events and click on items of interest to narrow down the search. One example is seeing an alert on a dashboard; when I open the search, tstats only shows the source IP addresses. I'd like to see more information, such as a destination and other fields, but tstats isn't designed to show information when you click on "events". This is why I'd like to take the information from tstats and open the Search & Reporting app, so I can scroll through the list of fields on the left and use that to help refine the results.
I have an indexer cluster and a search head cluster. I want to use a CSV threat feed to add an IP-reputation field using an automatic lookup. I tried all the online resources, but it doesn't work. Does anyone know of a limitation on automatic lookups with search head clustering? I used both the web-based and the config-file-based options, but neither worked. When I ran the lookup manually, everything worked.
I have what is hopefully a really straightforward issue. Essentially, I want to take the output (data within a specific field from sourcetypeA) of one search and use that data to search again within the same index but a different sourcetype (sourcetypeB).

Initial search:

index="data_index" sourcetype="sourcetypeA" field1="static_value"
| table field2
| dedup field2

The above search returns a single field with one value per row, ultimately more than one row and a different value in each row, something like:

field2
AAAAAAAAA
BBBBBBBBB
CCCCCCCCC

I then need to take each of the rows above and plug them into another search:

index="data_index" sourcetype="sourcetypeB"
| table field3, field4
| dedup field3
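This is the classic subsearch pattern: the inner search returns the field2 values, and the outer search consumes them as OR-ed filters. A sketch (untested; it assumes the field2 values should match field3 in sourcetypeB, so the rename maps one field name onto the other; swap in the real target field):

```
index="data_index" sourcetype="sourcetypeB"
    [ search index="data_index" sourcetype="sourcetypeA" field1="static_value"
      | dedup field2
      | rename field2 AS field3
      | fields field3 ]
| dedup field3
| table field3, field4
```

The `fields` command at the end of the subsearch controls which field becomes the filter; note that subsearches have default result and time limits, so for very large value sets a lookup-based approach may scale better.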
Hello Splunkers, I am trying to see if I can merge the following events and show them in a tabular format.

Sample event 1:
3/31/22 6:54:29.000 AM  GB (ID 5): BSN: 15730946, BON: 699-01, BOAA: 01, GPN: 1395, GSN: 920-000

Sample event 2:
3/31/22 6:54:29.000 AM  CPU (ID 0): BSN: 55506204BC, BON: 555.06901.0004, BOAA: 01, QPN: 16646, QSN: 001

Sample event 3:
3/31/22 6:54:29.000 AM  CHASN: 166066

I want to merge all events coming from the same host at the same time and show them in tabular format. If there is no value for a particular field, it should show UNKNOWN:

time                    host  CHASN   GPN   GSN      QPN    QSN
3/31/22 6:54:29.000 AM  ABC   166066  1395  920-000  16646  001
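Since the three events share the same host and timestamp, a stats rollup keyed on _time and host followed by fillnull gets close to the desired table. A sketch (untested; it assumes the fields are already extracted, e.g. with rex patterns such as the hypothetical ones below):

```
index=<your_index> sourcetype=<your_sourcetype>
| rex "CHASN:\s(?<CHASN>\d+)"
| rex "GPN:\s(?<GPN>\S+),\sGSN:\s(?<GSN>\S+)"
| rex "QPN:\s(?<QPN>\S+),\sQSN:\s(?<QSN>\S+)"
| stats values(CHASN) as CHASN, values(GPN) as GPN, values(GSN) as GSN, values(QPN) as QPN, values(QSN) as QSN by _time, host
| fillnull value="UNKNOWN" CHASN GPN GSN QPN QSN
```

Each group of same-time, same-host events collapses into one row, and fillnull substitutes UNKNOWN wherever no event in the group carried that field.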
Hi all, Can somebody recommend some sources from where I could learn about writing and implementing Telecom-Security use cases for Splunk? I'd appreciate any suggestions and recommendations. Cheers
Hi, I can't access the recent data in a metrics index anymore with the mstats command, but I can see it with mpreview. This means the data is there, but mstats just won't work on it anymore?

I am on a standalone Splunk install we are using for testing. I have a test that ran 4 hours ago; with mpreview you can see the data. This install was working fine until I upgraded an app; however, the app does not contain the index that I need. Using Analytics, I can see data from last Friday, but not from today. Is there a way to check whether the index is broken, and what are the next actions?