All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello fellow Splunk admins, I'm not sure if this is the right place to ask; if not, please point me to the right one. Is anyone aware whether Splunk Enterprise is affected by the recent Spring Boot vulnerability (Spring4Shell)? Do we need to watch out for any specific app that might have a Spring Framework dependency? And if so, is there a way to detect such a vulnerability in our environment? Thanks & regards, Arijit
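For the detection part, one common starting point is to look in web access logs for the class-loader manipulation pattern used by published Spring4Shell (CVE-2022-22965) exploit attempts. A minimal sketch, assuming your web traffic is indexed with a uri_query field (the index and sourcetype names here are placeholders):

index=web sourcetype=access_combined uri_query="*class.module.classLoader*"
| stats count by src_ip, uri_path

This only surfaces exploit attempts against your own web tier; whether Splunk Enterprise itself or a given app bundles a vulnerable Spring version is something Splunk's official security advisories have to confirm.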
Hello, the reason for my question is that I cannot install the database agent; when I run the following command I get no response:

nohup java -Dappdynamics.agent.maxMetrics=300000 -Ddbagent.name=DBMon-Lab-Agent -jar db-agent.jar &

A screenshot is attached. Thanks for the help!
hi All, Has anyone heard about any advisory from splunk on Spring4Shell vulnerability? regards, Kulwinder @isoutamo @PickleRick 
Hi teams, I am a newbie to Splunk. I have a log message like this:

4/5/22 6:03:22.697 PM
2022-04-05T10:03:22.697Z 802cf235-b8d6-454e-bb1a-25d16f6b5f21 INFO 802cf235-b8d6-454e-bb1a-25d16f6b5f21 INFO: Insert batch 0/6
END RequestId: 802cf235-b8d6-454e-bb1a-25d16f6b5f21
REPORT RequestId: 802cf235-b8d6-454e-bb1a-25d16f6b5f21 Duration: 601.44 ms Billed Duration: 602 ms Memory Size: 1024 MB Max Memory Used: 97 MB

I want to extract the Max Memory Used value from each message and create a time chart showing the Max Memory Used value and the average Max Memory Used value. Can anyone help me with this?
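A minimal sketch, assuming these events are searchable under placeholder index/sourcetype names and the value always appears as "Max Memory Used: <n> MB":

index=your_index sourcetype=your_sourcetype "Max Memory Used"
| rex "Max Memory Used:\s+(?<max_mem_used>\d+)\s+MB"
| timechart span=5m max(max_mem_used) AS "Max Memory Used" avg(max_mem_used) AS "Average Memory Used"

The span and the max/avg choice per bucket are assumptions; adjust them to the granularity you need.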
hi, I want to create an alert in Splunk, for example: if _raw > 10, raise an alert. What is the easiest way to create one? Can I do it within the search command? Play a WAV file? Play a sound through the browser? Run a Python script?
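Alerts in Splunk are normally built by saving a search and attaching trigger actions to it, rather than inside the search string itself. A minimal sketch of a search that could back such an alert, assuming each raw event is a bare number (the index name is a placeholder):

index=your_index
| eval value = tonumber(_raw)
| where value > 10

Saved as an alert with the trigger condition "number of results > 0", this fires whenever a matching event arrives. Playing a sound is not a built-in alert action, but a custom alert action or a script attached to the trigger could do it.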
Hi, I have an index in which I collect a lot of data, approximately 40 GB/day. In indexes.conf, I guess I've made a mistake and configured: maxDataSize = auto. Now it looks like I'm losing data older than roughly three months, and I guess it's due to this parameter. In the documentation (I should have read it before!), I can see for maxDataSize: "You should use "auto_high_volume" for high-volume indexes ... A "high volume index" would typically be considered one that gets over 10GB of data per day." 1/ Is it possible to change this parameter for an existing index? Obviously, given the volume I want to ingest, "auto_high_volume" is more appropriate (==> "maxDataSize = auto_high_volume" in indexes.conf). 2/ Is there any other reason why I might be losing data? Thanks for your help! David
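On 2/, a likely culprit is maxTotalDataSizeMB, which defaults to 500000 (about 500 GB): at ~40 GB/day of ingest that cap can be reached within a few months depending on compression, and once it is hit the oldest buckets are frozen regardless of maxDataSize. A sketch of an indexes.conf stanza, assuming the index is called myindex and the size/retention values suit your environment (all placeholders):

[myindex]
maxDataSize = auto_high_volume
# default is 500000 MB; ~40 GB/day can exceed that within months
maxTotalDataSizeMB = 16000000
# time-based retention, here one year
frozenTimePeriodInSecs = 31536000

maxDataSize can be changed on an existing index; it only affects how new hot buckets roll, not data already indexed.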
Greetings, we would like to segregate a couple of our assets and forward their data to other SIEM instances while keeping our current full Splunk setup. Is it possible to have the same logs on those assets sent from their respective UFs and HFs to other SIEM solutions instead of to the Splunk indexers? If so, are there any articles or documentation detailing how the logs are transferred and what steps are needed to achieve this? Thanks, best regards,
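Forwarders can route data to multiple output groups, including non-Splunk receivers. A minimal outputs.conf sketch, assuming the third-party SIEM listens on plain TCP (host names and ports are placeholders):

[tcpout]
defaultGroup = splunk_indexers

[tcpout:splunk_indexers]
server = idx1.example.com:9997

[tcpout:other_siem]
server = siem.example.com:5514
# send raw (uncooked) data so a non-Splunk system can parse it
sendCookedData = false

Individual inputs can then be pointed at a group with _TCP_ROUTING in inputs.conf; the official "Route and filter data" docs cover selective routing via props/transforms as well.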
Hi Community, I am having a weird issue with Splunk Enterprise. I had set up a universal forwarder to execute a script that gives me the list of all the different processes in the Linux environment. All of a sudden the script stopped producing results from 12 AM and the panel didn't work, but it started working again by itself after three days. This happened in both the test and production setups. Is there something that should be taken care of when using scripts on a universal forwarder, or is there some reason for this unusual behaviour? Regards, Pravin
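One place to start diagnosing is the forwarder's own internal logs: scripted inputs are launched by the ExecProcessor component, which also records anything the script writes to stderr. A minimal sketch, assuming the UF's internal logs reach your indexers (the host name is a placeholder):

index=_internal host=your_uf_host sourcetype=splunkd component=ExecProcessor

Errors around the time the panel went quiet (script crashes, permission problems, or a previous invocation still running and blocking the configured interval) usually show up here.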
Hello, I recently upgraded the "Splunk Add-on for Microsoft Office 365" on my Splunk heavy forwarder to version 3.0.0, running on Splunk 8.1.4. I configured the "Cloud App Security" integration and the input for "Cloud Application Security Alerts". But running the input, I think it is bugged: the job is scheduled to download the alerts every 5 minutes, yet every time it runs it downloads the alerts from the beginning (i.e. from day 1 that the platform was set up) and only the first 100 alerts. Moreover, events are not indexed properly, since the timestamp applied is the time the job runs, not the timestamp included in the event. Has anyone else experienced the same issue? Am I doing anything wrong? Thanks in advance
Hello all, I have a lookup table containing a list of URLs we want to search in Splunk, but instead of searching for the exact URL in the list, we want to match any URL that contains it as a substring. For example, the lookup has the URL blog.example.com, and I also want to find URLs that contain "blog.example.com". In a plain search this would be url=*blog.example.com*, but how can I apply this matching behaviour when using a lookup? Thanks for any help.
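One approach is to turn the lookup rows into wildcarded search terms with a subsearch. A minimal sketch, assuming events have an extracted url field and the lookup file is urls.csv with a field named url (all placeholders):

index=proxy [ | inputlookup urls.csv | eval url = "*" . url . "*" | fields url ]

The subsearch expands to (url="*blog.example.com*" OR ...). An alternative is to define the lookup in transforms.conf with match_type = WILDCARD(url) and store the wildcards in the lookup values themselves.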
Hi, I am trying to connect to the SEP API via Python, and my code is as follows:

# encoding = utf-8
import json
import base64
import requests

'''
IMPORTANT
Edit only the validate_input and collect_events functions. Do not edit any other
part in this file. This file is generated only once when creating the modular input.
'''

def validate_input(helper, definition):
    """Implement your own validation logic to validate the input stanza configurations."""
    pass

def collect_events(helper, ew):
    opt_clientid = helper.get_arg('clientid')
    opt_clientsecret = helper.get_arg('clientsecret')
    opt_customerid = helper.get_arg('customerid')
    opt_domainid = helper.get_arg('domainid')
    opt_apihost = helper.get_arg('apihost')

    # Request an OAuth token, authenticating with the base64-encoded client id/secret.
    token_url = "https://" + opt_apihost + "/v1/oauth2/tokens"
    creds = (opt_clientid + ':' + opt_clientsecret).encode('utf-8')
    creds_b64 = base64.urlsafe_b64encode(creds).decode()

    s = requests.Session()
    s.headers.update({'Accept': 'application/json'})
    s.headers.update({'Authorization': 'Basic ' + creds_b64})
    s.headers.update({'Content-Type': 'application/x-www-form-urlencoded'})
    s.headers.update({'Host': opt_apihost})
    f = s.post(token_url, data=[], verify=False)
    access_token = json.loads(f.text)['access_token']

    # Fetch the device list, passing the token in the request headers.
    url = "https://" + opt_apihost + "/v1/devices"
    headers = {"authorization": access_token}
    response = helper.send_http_request(url, 'GET', parameters=None, payload=None,
                                        headers=headers, cookies=None, verify=True,
                                        cert=None, timeout=None, use_proxy=True)
    r_json = response.json()

    # Collect devices not seen before (tracked via checkpoints) and index them.
    final_result = []
    for device in r_json["devices"]:
        if helper.get_check_point(str(device["id"])) is None:
            final_result.append(device)
            helper.save_check_point(str(device["id"]), "Indexed")

    event = helper.new_event(json.dumps(final_result), time=None, host=None,
                             index=None, source=None, sourcetype=None,
                             done=True, unbroken=False)
    ew.write_event(event)

The test code works fine, but: 1. When the events are indexed, each value does not appear as a separate entity but is grouped together (the whole list is written as a single event). 2. The data is not being updated at the interval mentioned in the beginning.
We want to integrate IBM X-Force's free open-source threat feed with Splunk. How can I achieve this? I have IBM's API ID and key. Any steps or documentation would be appreciated.
Hello community, I'm trying to figure out what is blocking/affecting a UF on Windows. The agent was installed using the CLI:

msiexec.exe /i splunkforwarder-<version>-x64-release.msi DEPLOYMENT_SERVER="<ip>:8089" SPLUNKPASSWORD=<password> AGREETOLICENSE=yes /quiet

The agent is installed, connects to the deployment server, and fetches apps/configuration. Looking at the log, the configuration seems to be read properly, as I can see it in there: blacklist/whitelist and other things. The setup is UF -> HF -> IX; the IX cannot be reached directly. Everything looks good, but here is where the issues start. Executing Splunk commands, including runtime commands, does not work while the service/agent is running: I get a blinking prompt and nothing happens. After shutting down the service/agent I can run commands, but then the runtime commands do not work, so I can't diagnose. This is one thing that seems off. Also, the system can reach both the DS and the HF, and the traffic is allowed. However, watching the traffic from/to the system, I see regular traffic with the DS but nothing toward the HF: not even attempts to establish a connection. This does not seem reasonable either; I would expect at least failed connections. We suspect this is caused by managed configuration of the computer itself in some manner, but any suggestions on possible ways to diagnose/solve this would be much appreciated.
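If the UF's internal logs do make it through, its output pipeline is worth checking first: connection attempts (and failures) toward the HF are logged by the TcpOutputProc component. A minimal sketch, with a placeholder host name:

index=_internal host=your_uf_host sourcetype=splunkd component=TcpOutputProc

If nothing arrives at all, the same messages can be read locally in $SPLUNK_HOME\var\log\splunk\splunkd.log on the forwarder; the complete absence of connection attempts there would point at missing or unread outputs.conf settings rather than the network.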
I am calculating the percentage for each HTTP status code, but I would also like to display the total number of requests in the results. The query below displays only the percentage of each status code. Is there a way I can add _Total to the results? Query:

index=app_ops_prod host="sl55caehepc20*" OR host="sl55sdsapp001" OR host="sl55sdsapp002" source="/var/mware/logs/SDS*/*localhost*" method="POST"
| timechart span=1d count by status
| addtotals row=true fieldname=_Total
| foreach * [eval <<FIELD>> = '<<FIELD>>' * 100 / _Total]
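Field names beginning with an underscore are hidden from results (and skipped by the foreach * wildcard), which is why _Total never shows up even though the division works. A sketch of one fix: rename it to a visible name after the percentages are computed:

index=app_ops_prod host="sl55caehepc20*" OR host="sl55sdsapp001" OR host="sl55sdsapp002" source="/var/mware/logs/SDS*/*localhost*" method="POST"
| timechart span=1d count by status
| addtotals row=true fieldname=_Total
| foreach * [eval <<FIELD>> = round('<<FIELD>>' * 100 / '_Total', 2)]
| rename _Total AS Total_Requests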
Hi all, does the Splunk Security Essentials app also map our custom (user-defined) correlation searches to the different MITRE ATT&CK tactics? I know it does so for the predefined ones, but what about user-defined searches? Thanks
I'm developing a new app with React and the SplunkJS SDK. In my app, I want to call a custom endpoint service that lives in the same app, with the URL:

https://x.x.x.x:8089/servicesNS/-/<my-app>/run

In my code, I'm trying to call the custom endpoint like this:

handleSplunkConnect(event) {
    let http = new splunk_js_sdk.SplunkWebHttp();
    return new splunk_js_sdk.Service(
        http,
        application_name_space,
    );
}

makeCloudFlareAction(event) {
    let service = this.handleSplunkConnect(); // create the Splunk service connection
    let params = {
        "customer": "JAUC-011",
    };
    service.post("/servicesNS/-/<my-app>/run", params, function(err, resp) {
        console.log(resp);
        console.log(err);
    });
}

But I see that the request URL is https://x.x.x.x:8000/en-US/splunkd/__raw/servicesNS/-/<my-app>/run?output_mode=json instead of https://x.x.x.x:8089/servicesNS/-/<my-app>/run?output_mode=json. My question is: how can I call a custom endpoint service on splunkd port 8089 using the SplunkJS SDK? Or can I call a custom endpoint service through port 8000 (web) instead of 8089 (splunkd)?
Hi all, I have logs in Splunk like the below (this is one log):

{ "connector": { "state": "RUNNING", "worker_id": "mwgcb-csrla02u.nam.nsroot.net:8084" }, "name": "source.mq.apac.tw.ebs.ft.ft.raw.int.rawevent", "tasks": [ { "id": 0, "state": "RUNNING", "worker_id": "mwgcb-csrla02u.nam.nsroot.net:8084" } ], "type": "source" }
{ "connector": { "state": "RUNNING", "worker_id": "mwgcb-csrla01u.nam.nsroot.net:8084" }, "name": "source.mq.apac.tw.cards.ecms.ecms.raw.int.rawevent", "tasks": [ { "id": 0, "state": "RUNNING", "worker_id": "mwgcb-csrla02u.nam.nsroot.net:8084" } ], "type": "source" }
{ "connector": { "state": "RUNNING", "worker_id": "mwgcb-csrla01u.nam.nsroot.net:8084" }, "name": "sink.mq.apac.tw.cards.ecms.ecms.derived.int.sinkevents", "tasks": [ { "id": 0, "state": "RUNNING", "worker_id": "mwgcb-csrla01u.nam.nsroot.net:8084" }, { "id": 1, "state": "RUNNING", "worker_id": "mwgcb-csrla01u.nam.nsroot.net:8084" } ], "type": "sink" }

I have created the query below to extract the fields and build a table of those values:

.....
| rex field=_raw max_match=0 "\"connector\"\:\s\{\s+\"state\"\:\s\"(?P<Connector_State>[^\"]+)\""
| rex field=_raw max_match=0 "\"connector\"\:\s\{\s+\"state\"\:\s\"\w+\"\,\s+\"\w+\"\:\s\"(?P<Worker_ID>[^\:]+)"
| rex field=_raw max_match=0 "\"connector\"\:\s\{\s+\"state\"\:\s\"\w+\"\,\s+\"\w+\"\:\s\"[^\:]+\:(?P<Port>\d+)\""
| rex field=_raw max_match=0 "\"connector\"\:\s\{\s+\"state\"\:\s\"\w+\"\,\s+\"\w+\"\:\s\"[^\"]+\"\s+\}\,\s+\"name\"\:\s\"(?P<Connector_Name>[^\"]+)\""
| search Connector_State=RUNNING
| table Connector_Name, Worker_ID, Port

It gives me the table in the below format, where the multiple values extracted from one event are stacked inside a single row:

Row 1: Connector_Name = source.mq.apac.tw.cards.ecs.ecs.raw.sit.rawevent, sink.mq.apac.tw.cards.ecs.ecs.raw.sit.rawevent, sink.mq.apac.hk.ebs.im.im.derived.int.sinkevents; Worker_ID = gtgcb-csrla02s.nam.nsroot.net, gtgcb-csrla01s.nam.nsroot.net, gtgcb-csrla02s.nam.nsroot.net; Port = 8087, 8087, 8087
Row 2: Connector_Name = sink.mq.apac.hk.ebs.im.im.derived.int.sinkevents; Worker_ID = gtgcb-csrla02s.nam.nsroot.net; Port = 8087
Row 3: Connector_Name = source.mq.apac.tw.cards.ecs.ecs.raw.sit.rawevent, sink.mq.apac.tw.cards.ecs.ecs.raw.sit.rawevent; Worker_ID = gtgcb-csrla02s.nam.nsroot.net, gtgcb-csrla01s.nam.nsroot.net; Port = 8087, 8087

But the requirement is to get the table with one connector per row, as below:

Connector_Name                                      Worker_ID                        Port
source.mq.apac.tw.cards.ecs.ecs.raw.sit.rawevent    gtgcb-csrla02s.nam.nsroot.net    8087
sink.mq.apac.tw.cards.ecs.ecs.raw.sit.rawevent      gtgcb-csrla01s.nam.nsroot.net    8087
sink.mq.apac.hk.ebs.im.im.derived.int.sinkevents    gtgcb-csrla02s.nam.nsroot.net    8087
sink.mq.apac.hk.ebs.im.im.derived.int.sinkevents    gtgcb-csrla02s.nam.nsroot.net    8087
source.mq.apac.tw.cards.ecs.ecs.raw.sit.rawevent    gtgcb-csrla02s.nam.nsroot.net    8087
sink.mq.apac.tw.cards.ecs.ecs.raw.sit.rawevent      gtgcb-csrla01s.nam.nsroot.net    8087

Please help me modify the query to get the output in the desired format.
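Since each rex uses max_match=0, the extracted fields are multivalue within a single event; zipping them positionally and expanding gives one row per connector. A sketch, appended after the existing extractions (the "|" delimiter is an arbitrary choice assumed not to occur in the values):

..... existing rex extractions .....
| eval zipped = mvzip(mvzip(Connector_Name, Worker_ID, "|"), Port, "|")
| mvexpand zipped
| eval parts = split(zipped, "|")
| eval Connector_Name = mvindex(parts, 0), Worker_ID = mvindex(parts, 1), Port = mvindex(parts, 2)
| table Connector_Name, Worker_ID, Port

Note that the Connector_State filter would need the same zip-and-expand treatment if states can differ within one event.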
Hi, may I know where I can get the latest ITSI Administration Manual documentation/PDF? Please assist.
I'm trying to extract and dashboard the latest number in my logs for each "7d" stat. Some sample logs:

[db]: 00:05:01.000: 3ddesigns:total 173304125
[db]: 00:05:01.000: 3ddesigns:1d 253113
[db]: 00:05:01.000: 3ddesigns:7d 1435675
[db]: 00:05:01.000: 3ddesigns:30d 5863610
[db]: 00:05:01.000: 3dlessons:total 92148058
[db]: 00:05:01.000: 3dlessons:1d 103077
[db]: 00:05:01.000: 3dlessons:7d 539695
[db]: 00:05:01.000: 3dlessons:30d 2216809
[db]: 00:05:01.000: circuitsdesigns:total 62150103
[db]: 00:05:01.000: circuitsdesigns:1d 125770
[db]: 00:05:01.000: circuitsdesigns:7d 724227
[db]: 00:05:01.000: circuitsdesigns:30d 2936667

I have a search query, but it gives me a NULL field. Is there a way to rename the fields?

obs_mnkr="tnkrcad-p-ue1" source="/disk/logtxt/stats.log"
| multikv noheader=t
| fields _raw
| rex "3ddesigns:(?<designs>\w+)\s+(?<num>\d+)"
| regex designs!="1d"
| regex designs!="30d"
| regex designs!="total"
| rex "circuitsdesigns:(?<circuits>\w+)\s+(?<num>\d+)"
| regex circuits!="1d"
| regex circuits!="30d"
| regex circuits!="total"
| timechart span=1w last(num) by designs
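A single extraction that captures the product name and matches only the 7d lines sidesteps the NULL series, which comes from charting by a field (designs) that is empty on the circuitsdesigns events. A minimal sketch, keeping your base search:

obs_mnkr="tnkrcad-p-ue1" source="/disk/logtxt/stats.log"
| rex "(?<product>\w+):7d\s+(?<num>\d+)"
| where isnotnull(product)
| timechart span=1w last(num) by product

Here product captures 3ddesigns, 3dlessons, circuitsdesigns, and any future stat names without needing a rex per product.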
Hello, I have three fields from which I need to build a line chart on a time series: ServerTime, Endpoint, ResponseTime. I need to show each endpoint's 95th-percentile response time over ServerTime, i.e. the response time on the Y-axis, the time series on the X-axis, and a legend showing the endpoints. Can you please suggest a query that would achieve this? Thank you
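A minimal sketch, assuming ServerTime can stand in for the event time and is ISO-8601 formatted (adjust the strptime format string to your data; index name and span are placeholders):

index=your_index
| eval _time = strptime(ServerTime, "%Y-%m-%dT%H:%M:%S.%3N")
| timechart span=1h perc95(ResponseTime) by Endpoint

timechart puts _time on the X-axis and draws one series per Endpoint; perc95 gives the 95th-percentile ResponseTime per time bucket.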