All Topics

Hi, looks like I am missing something. I have a Splunk alert that is a bit spammy. I would like to use the Alert Manager app to send me one alert a day: notify the first time the alert shows up, then stay quiet for the rest of the day and just increment the duplicate counter. I can get alerts counted as duplicates, but I still get e-mails for all of them. I have not found a way in the suppression rules to hide follow-on alerts. Thanks, afx
Hi there, I need to fetch some data based on a unique ID spread across different log lines. Can you please help me with the search query? Example of the relevant log lines sharing a unique ID:

Time=DDMMYY ID=001 INFO Requester=Bob
Time=DDMMYY ID=001 INFO Request Type=Normal
Time=DDMMYY ID=001 INFO Request Status=success

I need them combined into this format:

Time      ID    Requester    Request Type    Request Status
DDMMYY    001   Bob          Normal          success

Please help. Thanks in advance.
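In SPL this kind of merge is usually done with `stats values(...) by ID`. The Python sketch below only illustrates the combine-by-ID logic on sample lines like the ones in the question; the key=value parsing and field spellings are assumptions.

```python
import re

# Sample lines shaped like the question's (field spellings assumed)
logs = [
    "Time=DDMMYY ID=001 INFO Requester=Bob",
    "Time=DDMMYY ID=001 INFO Request Type=Normal",
    "Time=DDMMYY ID=001 INFO Request Status=success",
]

def merge_by_id(lines):
    """Collapse key=value pairs from multiple log lines into one record per ID."""
    records = {}
    for line in lines:
        line = line.replace(" INFO ", " ")  # drop the log-level token
        # Capture key=value pairs; keys may contain one space ("Request Type")
        fields = dict(re.findall(r"(\w+(?: \w+)?)=(\S+)", line))
        records.setdefault(fields["ID"], {}).update(fields)
    return records

print(merge_by_id(logs)["001"])
```

Each ID ends up as a single dictionary with all fields merged, which is the row shape the question asks for.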
Hi, I'm trying to get Splunk real-time results using Splunk's Python SDK. Everything works well, but a second is missing from the results. I don't know whether this is a limitation or whether I missed a parameter in the query; it is really troublesome. In my results, the entire second '2020-11-25T10:42:26' is missing and never appears. Do you have any idea where this might come from? I even tried to build a timechart manually with `search index=_internal | bin _time span=10ms | chart count by _time`; in that case a millisecond (not a second) goes missing instead. A second is missing every 5-10 results, and I can't figure out why this is not working properly.
I got a pentest report with the following finding: it was possible to access these endpoints unauthenticated:

https://x.x.x.x/en-US/config
https://x.x.x.x/en-GB/config
https://x.x.x.x/en-US/info
https://x.x.x.x/en-US/paths
https://x.x.x.x/en-us/lists
https://x.x.x.x/en-US/embed

Is it really a vulnerability? They said that it is config data, not public data, so it should not be visible. How can we prevent those endpoints from being reached unauthenticated?
Hi, I am new to custom extensions. I am trying to build a dummy one to see how it works. I have the folder in machine-agent-home/monitors and a simple script that echoes a predefined metric. I just want to see it in the Metric Browser under Custom Metrics. This is all the script contains:

printf "name=Custom Metrics|Disks %s|KB written/sec,aggregator=OBSERVATION,value=%d\n" "/test" 2

So it just prints:

name=Custom Metrics|Disks /test|KB written/sec,aggregator=OBSERVATION,value=2

But in the Metric Browser, Custom Metrics is not visible under my server path when I expand the tree. I also see this odd line in the log file: "Forcing stop for Executable Command...". Below is the snippet from the log file as well as the contents of monitor.xml. Can someone tell me what I am doing wrong? Thank you very much in advance!

[Agent-Monitor-Scheduler-1] 24 Nov 2020 17:29:27,308 INFO ContinuousTaskMonitor - Continuous Task [AndreiTestMonitor] Restarted
[Worker-10] 24 Nov 2020 17:29:27,308 INFO ExecTask - Initializing process for exec task
[Worker-10] 24 Nov 2020 17:29:27,308 INFO ExecTask - Running executable script on disk [test.sh]
[Worker-10] 24 Nov 2020 17:29:27,308 INFO ExecTask - Executing script [/opt/appdynamics/machine-agent/monitors/andrei_custom_ext/test.sh]
[Worker-10] 24 Nov 2020 17:29:27,327 INFO ExecTask - No task arguments to add
[Worker-10] 24 Nov 2020 17:29:27,328 INFO ExecTask - Initializing process builder with command list[/opt/appdynamics/machine-agent/monitors/andrei_custom_ext/test.sh]
[Worker-10] 24 Nov 2020 17:29:27,328 INFO ExecTask - Initializing process builder with enviornment variables {}
[Worker-10] 24 Nov 2020 17:29:27,328 INFO ExecTask - Running Executable Command [[/opt/appdynamics/machine-agent/monitors/andrei_custom_ext/test.sh]]
[Worker-10] 24 Nov 2020 17:29:27,333 INFO ExecTask - Forcing stop for Executable Command [[/opt/appdynamics/machine-agent/monitors/andrei_custom_ext/test.sh]]
[Worker-10] 24 Nov 2020 17:29:27,333 INFO MonitorStreamConsumer - Stopping monitored process

Monitor.xml:

<monitor>
  <name>AndreiTestMonitor</name>
  <type>managed</type>
  <enabled>true</enabled>
  <enable-override os-type="solaris">true</enable-override>
  <enable-override os-type="sunos">true</enable-override>
  <description>Test Andrei Custom Extension</description>
  <monitor-configuration>
  </monitor-configuration>
  <monitor-run-task>
    <execution-style>continuous</execution-style>
    <name>Run</name>
    <type>executable</type>
    <task-arguments>
    </task-arguments>
    <executable-task>
      <type>file</type>
      <file os-type="linux">test.sh</file>
      <file os-type="mac">test.sh</file>
      <file os-type="windows">windows-stat.bat</file>
      <file os-type="solaris">test.sh</file>
      <file os-type="sunos">test.sh</file>
      <file os-type="aix">aix-stat.sh</file>
      <file os-type="z/os">zos-stat.sh</file>
      <file os-type="hp-ux">hpux-stat.sh</file>
    </executable-task>
  </monitor-run-task>
</monitor>
We have the query below, which checks whether a server is down. We want it to send an alert when the status changes from Stopped to Running; right now it only alerts while the status is Stopped.

index="init_butcher" sourcetype="services_status.out.log" host=*
| chart useother=f values(status) as services over host by service limit=0
| eval status=if('abc'="STOPPED", "DOWN", "Critical")
| where 'cfd'="STOPPED" OR 'hij'="STOPPED" OR ''="STOPPED" OR 'jkl'="STOPPED" OR 'mno'="STOPPED" OR 'pqr'="STOPPED" OR 'stu'="STOPPED" OR 'vux'="STOPPED" OR 'yz'="STOPPED"
| fields butcher, host, status
| mvcombine host delim=","
| eval message="Butcher Services are at status: ".status." Host(s):".mvjoin(host,",")
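Since the goal is to fire on a state transition rather than on a state, one common SPL approach is to compare the two most recent status values per host (e.g. with streamstats). The Python sketch below, with made-up sample data, just illustrates the transition check being described:

```python
def find_transitions(events, frm="STOPPED", to="RUNNING"):
    """Return hosts whose consecutive status values go frm -> to.

    events: list of (host, status) tuples ordered oldest to newest.
    """
    last = {}          # host -> previously seen status
    transitions = []
    for host, status in events:
        if last.get(host) == frm and status == to:
            transitions.append(host)
        last[host] = status
    return transitions

events = [("web01", "RUNNING"), ("web01", "STOPPED"), ("web02", "STOPPED"),
          ("web01", "RUNNING"), ("web02", "STOPPED")]
print(find_transitions(events))  # web01 went STOPPED -> RUNNING
```

The alert condition then becomes "transitions list is non-empty" rather than "status is STOPPED".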
Hello SMEs, good day. I would like to create a behavior-based (anomaly-based) alert using the fields IP_Address and total_bytes (bytes_sent + bytes_received). The condition: if the total_bytes consumed by an IP_Address in the current week is more than 50% above or below what was consumed in the previous week, an alert should be triggered. Any idea or query that meets this requirement would be really helpful. Many thanks in advance.
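In SPL this is typically two week-sized buckets per IP compared with an eval. The helper below is a plain-Python sketch of the threshold condition only; the field names and sample numbers are made up:

```python
def exceeds_threshold(prev_bytes, curr_bytes, pct=0.5):
    """True when current usage deviates from last week's by more than pct."""
    if prev_bytes == 0:
        return curr_bytes > 0      # any traffic from a previously silent IP
    change = abs(curr_bytes - prev_bytes) / prev_bytes
    return change > pct

# ip -> (last week's total_bytes, this week's total_bytes), made-up numbers
usage = {"10.0.0.5": (1000, 1600), "10.0.0.9": (1000, 1200)}
alerts = [ip for ip, (prev, curr) in usage.items() if exceeds_threshold(prev, curr)]
print(alerts)  # only 10.0.0.5 moved by more than 50%
```

The same `abs(curr - prev) / prev > 0.5` comparison is what the SPL eval would express.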
I have a task to move all users' knowledge objects (KOs), except those owned by admins and nobody, from the search app to their own apps. When I try to move a KO, I get the error below:

Replication-related issue: Cannot move asset lacking a pre-existing asset ID

Online searches show a workaround: re-save the KO (with no changes) and then move it to the other app. The problem is that we have thousands of KOs, so there is no way to do this manually. I tried to automate it with a Python script, but I did not find a "/save" or "/re-save" REST endpoint. The endpoints for views, for example:

<link href="/servicesNS/admin/search/data/ui/views/my_dashboard" rel="list"/>
<link href="/servicesNS/admin/search/data/ui/views/my_dashboard/_reload" rel="_reload"/>
<link href="/servicesNS/admin/search/data/ui/views/my_dashboard" rel="edit"/>
<link href="/servicesNS/admin/search/data/ui/views/my_dashboard" rel="remove"/>
<link href="/servicesNS/admin/search/data/ui/views/my_dashboard/move" rel="move"/>

I see /move but no /save. I need help finding the right REST endpoint so that I can script the save (with no changes) followed by the move for all KOs (saved searches, views, event types, etc.) for all users.
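For scripting, one possible approach (a sketch only, with assumed parameter names) is to POST the object's own edit endpoint, the `rel="edit"` link in the list above, with an unchanged body to force the re-save, then POST to its /move endpoint. Only the URL/payload construction is shown concretely; wire it to the HTTP client of your choice:

```python
from urllib.parse import urlencode

BASE = "https://localhost:8089"  # assumed splunkd management port

def edit_url(owner, app, kind, name):
    """Edit endpoint for a knowledge object (pattern taken from the link list)."""
    return f"{BASE}/servicesNS/{owner}/{app}/{kind}/{name}"

def move_url(owner, app, kind, name):
    return edit_url(owner, app, kind, name) + "/move"

def move_payload(dest_app, owner):
    """Body for the move POST; the parameter names here are assumptions."""
    return urlencode({"app": dest_app, "user": owner})

# Sketch: POST an empty/unchanged body to edit_url(...) ("re-save", which
# should assign the missing asset ID), then POST move_payload(...) to
# move_url(...). Iterate over savedsearches, data/ui/views, eventtypes, etc.
print(move_url("admin", "search", "data/ui/views", "my_dashboard"))
```

This is untested against a live instance; verify the move parameters against your Splunk version's REST API reference before running it in bulk.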
Hi, how do I configure the agent for Oracle Forms metric monitoring? Thanks ^ Edited by @Ryan.Paredez: this comment originally appeared on this thread: Oracle forms
Hi all, I need to download/extract a list of applications with the last 6 months of data (business transactions/load) on a SaaS controller. I tried loading our list of 77 applications, but the page keeps loading and never finishes. I also tried to copy the data from the applications page (for a shorter duration) but was unable to copy it. Please advise how this can be achieved. Regards, Rohit
Hi, could someone please help me rename the string values in these fields? I want to remove the spaces from the highlighted values. Here is the query I'm trying:

index="index" sourcetype=_json
| spath path="results{}.summary" output=Summary
| spath path="results{}.description" output=Description
| spath path="results{}.category" output=Category
| spath path="results{}.sysAdmin" output=SysAdmin
| rename Values(Summary) as values
| rename values("User deactivated") AS "User_deactivated"
| table Summary Category SysAdmin
As the title suggests, I want to index internal logs only and forward all other logs to external indexers. Here is the setup: I have one cluster, plus three indexers set up separately outside the cluster. The cluster has a CM, an SH, and three indexers. I want those three clustered indexers to act as heavy forwarders and send all other logs out to the external indexers.

The default outputs.conf is:

[tcpout]
maxQueueSize = auto
forwardedindex.0.whitelist = .*
forwardedindex.1.blacklist = _.*
forwardedindex.2.whitelist = (_audit|_internal|_introspection|_telemetry|_metrics|_metrics_rollup)
forwardedindex.filter.disable = false
indexAndForward = false

Here is what I have in outputs.conf:

[tcpout]
defaultGroup=noforward
disabled=false

[indexAndForward]
index=true
selectiveIndexing=true

[tcpout:forwarders]
server=<forwarders>:9997

Below is my props.conf:

[default]
TRANSFORMS-forwardit = forwardit

[host::*.foo.splunk.com]
TRANSFORMS-routing = indexing

Below is transforms.conf:

[forwardit]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = forwarders

[indexing]
REGEX = .
DEST_KEY = _INDEX_AND_FORWARD_ROUTING
FORMAT = local

Essentially, all internal indexes should stay within the cluster's indexers, while the rest of the indexes/logs are forwarded to the external indexers.
We are using the TA in Splunk Cloud. The TA is installed on the Splunk IDM, and when we try to create any input, we are unable to select the custom index we created on the indexer. Is there a feature that allows us to select our custom index from the dropdown?
I wrote the Python code below, which gives me only the first 100 events. I checked the online docs and saw "count=0" as a solution to get all results, but that option only works with the Splunk SDK (splunklib.client.Service); I am using Python's requests library. I need help looping/paginating through all the results for this search id (sid).

import requests
import json

url = base_url + "/services/search/jobs/%s/results" % sid
headers = {
    "content-type": "application/x-www-form-urlencoded",
    "Authorization": "Splunk %s" % sessionkey
}
payload = {
    "output_mode": "json"
}
res = requests.get(url, headers=headers, params=payload, verify=False)
result = json.loads(res.text)["results"]
print("length is %s" % len(result))  # output here is 100
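The results endpoint supports paging via `offset` and `count` query parameters, so one way to get everything is to loop until an empty page comes back. The sketch below keeps the paging logic separate from the HTTP call (the fetch-callback shape is my own) so it can be tested without a live server:

```python
def fetch_all(fetch_page, page_size=100):
    """Collect all results by paging with offset/count query parameters.

    fetch_page(offset, count) should GET
    /services/search/jobs/<sid>/results?output_mode=json&offset=...&count=...
    and return the decoded "results" list (empty when exhausted).
    """
    results, offset = [], 0
    while True:
        page = fetch_page(offset, page_size)
        if not page:
            break
        results.extend(page)
        offset += len(page)
    return results

# With requests (as in the question), fetch_page would look roughly like:
#   def fetch_page(offset, count):
#       payload = {"output_mode": "json", "offset": offset, "count": count}
#       res = requests.get(url, headers=headers, params=payload, verify=False)
#       return res.json()["results"]

# Quick check against a fake backend of 250 events:
data = [{"n": i} for i in range(250)]
print(len(fetch_all(lambda off, cnt: data[off:off + cnt])))  # 250
```

Stopping on an empty page (rather than a page smaller than `page_size`) is the safer condition, since the server may cap `count` below what you requested.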
Hello, I'm having trouble understanding lookups; any help would be appreciated. If I have a lookup table with ID and User columns, the following works:

|makeresults
|eval ID="1"
|lookup test.csv ID output User

But if I add a second eval and run:

|makeresults
|eval ID="1"
|eval column="User"
|lookup test.csv ID output column

I get the error "Could not find all of the specified destination fields in the lookup table". I need to be able to evaluate the column name before passing it to the lookup command. Is that possible? Thanks!
I am trying to get the time difference between two events and, using timechart, I want to display max(time difference) over 30-second spans. The query below works with table; I want the same with timechart. Any help is appreciated.

index=express_its_pds_solsup_dce sourcetype="express:dce:shipmentlog" "EWPX - WCO: Begin saving"
| rex field=_raw "build.(?<SHIP_FILE>[\S+]*)"
| stats latest(_time) as begin_time by SHIP_FILE
| join SHIP_FILE
    [ search index=express_its_pds_solsup_dce sourcetype="express:dce:shipmentlog" "EWPX - WCO: End"
      | rex field=_raw "build.(?<SHIP_FILE>[\S+]*)"
      | stats latest(_time) as end_time by SHIP_FILE ]
| eval ship_throughput = end_time-begin_time
| table SHIP_FILE, ship_throughput
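One possible direction (an assumption, not a verified answer): restore `_time` after the stats/join, e.g. `| eval _time=end_time | timechart span=30s max(ship_throughput)`, since timechart needs a `_time` field that the stats pipeline has discarded. The Python sketch below just shows the 30-second bucket-and-max logic that timechart would apply:

```python
def max_by_bucket(samples, span=30):
    """samples: iterable of (epoch_seconds, value); max value per span bucket."""
    buckets = {}
    for t, v in samples:
        b = int(t // span) * span          # bucket start, like bin _time span=30s
        buckets[b] = max(buckets.get(b, v), v)
    return dict(sorted(buckets.items()))

print(max_by_bucket([(0, 5), (10, 9), (35, 2)]))  # {0: 9, 30: 2}
```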
Hi everyone! I'm having a tough time trying to figure out a search command for this lab assignment. I entered `source=/var/log/auth.log session | top user` in the search bar and got the users and a count, but I'm not sure that is the session count. I've tried other searches but don't seem to get the results I need. I need a search that shows the opened and closed sessions per user so I can create a pie chart of that data. I'm currently a cybersecurity student and new to Splunk; I would appreciate the help. Thank you!
Hi, I created the rex command below for a user agent string and tested the regular expression on regex101.com, where it works fine:

Match 1: 12-62  (Windows NT 10.0; Win64; x64; Trident/7.0; rv:11.0
  Group `os`              13-23  Windows NT
  Group `os_version`      24-41  10.0; Win64; x64;
  Group `layout_engine`   42-49  Trident
  Group `engine_version`  50-53  7.0
  Group `browser`         55-57  rv
  Group `browser_version` 58-62  11.0

However, when I execute the same command in a Splunk search, I get this error:

Error in 'rex' command: regex="\((?<os>\w+\s+\w+)\s(?<os_version>[^;]+.[^\)][^;]+.[^\)][^;]+.)\s(?<layout_engine>\w+).(?<engine_version>\w+.\d+).\s(?<browser>\w+).(?<browser_version>\w+.\d+)" has exceeded configured match_limit, consider raising the value in limits.conf.

User agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; Trident/7.0; rv:11.0) like Gecko

Rex command: | rex "\((?<os>\w+\s+\w+)\s(?<os_version>[^;]+.[^\)][^;]+.[^\)][^;]+.)\s(?<layout_engine>\w+).(?<engine_version>\w+.\d+).\s(?<browser>\w+).(?<browser_version>\w+.\d+)"

Thanks
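The match_limit error usually points to heavy backtracking: the stacked `[^;]+.` groups give the PCRE engine many ways to re-split the same text on a non-matching event. Raising match_limit in limits.conf works around it, but a tighter pattern is the better fix. Below is a sketch tested with Python's `re` against the quoted user agent; the group layout (splitting the architecture tokens out of os_version) is my own choice, so adapt it to taste before pasting into rex:

```python
import re

ua = "Mozilla/5.0 (Windows NT 10.0; Win64; x64; Trident/7.0; rv:11.0) like Gecko"

# Each field uses a bounded character class with an explicit "; " delimiter,
# so the engine has only one way to split the string (no backtracking blowup).
pattern = re.compile(
    r"\((?P<os>\w+ \w+) (?P<os_version>[^;]+); (?P<os_arch>[^;]+; [^;]+); "
    r"(?P<layout_engine>\w+)/(?P<engine_version>[\d.]+); "
    r"(?P<browser>rv):(?P<browser_version>[\d.]+)\)"
)

m = pattern.search(ua)
print(m.groupdict())
```

The same pattern (with rex's `(?<name>...)` group syntax) should match in Splunk without approaching the match limit.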
Hi, I am subscribed to the NVD CVE RSS feed, which I receive via Splunk, and I have an alert that fires when one of our devices matches. The issue is that when the RSS feed is updated (every 24 hours in our case), it also updates the date and time of the event and therefore gives me the same alert as yesterday. My alert checks every day at 8:00 PM for new CVEs matching my devices. For example, I received an alert yesterday for a new Checkpoint CVE, and today I received the same alert with the same CVE.

index=main *Checkpoint* | table publish,cve,link

Could you help me? Thanks.
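A common pattern for this is to keep a record of already-alerted CVE ids (in Splunk, typically a lookup that the alert search updates and filters against, something like `NOT [| inputlookup alerted_cves.csv]`; the lookup name is an assumption). The Python sketch below just illustrates the dedup logic:

```python
def new_cves(feed_cves, alerted):
    """Return only CVE ids not alerted before; record them in the alerted set."""
    fresh = [c for c in feed_cves if c not in alerted]
    alerted.update(fresh)
    return fresh

alerted = set()   # in Splunk this state would live in a lookup, not in memory
print(new_cves(["CVE-2020-0001", "CVE-2020-0002"], alerted))  # both are new
print(new_cves(["CVE-2020-0001", "CVE-2020-0003"], alerted))  # only 0003 is new
```

Keying the dedup on the cve field instead of the (re-published) event time is what stops the daily repeats.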
Hi, we are planning to use the Splunk MINT iOS SDK, but we need to support iOS 14 with the new Xcode 12.2. Please let me know if this is in the works.