All Topics


I tried the query below. I am getting the edit actions in the results, but I am not able to see what was actually edited, like a deep-dive result. Basically I need to see whether anyone in these roles edited, added, or deleted something in Splunk.

    index=_audit user!=splunk-system-user user!="n/a" (action=edit OR action=create OR action=delete)
    | table _time, user, action, info, host

Result table:

    Date & time: aaaaaaaaa
    user: AAAAAA
    action: edit_deployment_client, edit_user   (for these results I need to see what the user actually edited)
    host: BBBBBBBB

Thanks in advance.
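One sketch for deep-diving into what changed (hedged: which detail fields `_audit` carries varies by action and Splunk version) is to keep the raw audit event in the table, since `_raw` often contains the REST endpoint and the arguments of the change:

```
index=_audit user!=splunk-system-user user!="n/a" (action=edit* OR action=create* OR action=delete*)
| table _time user action info _raw
```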
Is it possible to filter the logs based on an HTTP header value? I am running a load test with JMeter. When searching the logs, a huge number of unwanted microservice calls and internal calls make it tedious to filter down to my traffic. To solve this, I plan to send a custom header with each API call and filter on it in Splunk. I have done this in other APM tools. Please let me know the possibilities. Thanks in advance.
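If the custom header makes it into the indexed events (that depends on the application or proxy actually logging it; the header name and index below are made up for illustration), filtering becomes a plain term search:

```
index=app_access "X-Load-Test-Id: jmeter-run-42"
| stats count by uri_path
```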
Hello Splunk Community: I'm trying to convert several standalone Python scripts into Splunk external lookups and running into problems. Any thoughts?

I've looked at the external_lookup.py example that ships with Splunk and created a simple example that should just output the first field and create content for the second field. It works from the Splunk CLI:

    sh-3.2# cat csv_test.csv
    field1,field2
    mydataexample,
    sh-3.2# cat csv_test.csv | /Applications/Splunk/bin/splunk cmd python test_output.py field1 field2
    field1,field2
    mydataexample,NoField2_Data

But not from the Splunk UI:

    index="ex_firewall" accept OR allowed
    | stats count by dst_ip
    | lookup test_output.py dst_ip as field1

It throws this error:

    Error in 'lookup' command: Could not construct lookup 'test_output, dst_ip, as, field1'. See search.log for more details.

I've placed the script in what I believe is the proper location (/Applications/Splunk/etc/apps/splunk/etc/system/bin/test_output.py) and added it to transforms.conf:

    sh-3.2# cat /Applications/Splunk/etc/apps/splunk/etc/system/local/transforms.conf
    # Example external lookup
    #[dnslookup]
    #external_cmd = external_lookup.py clienthost clientip
    #fields_list = clienthost,clientip

    # Test output external lookup
    [test_output.py]
    external_cmd = test_output.py field1 field2
    fields_list = field1, field2
    external_type = python

This is the script itself:

    #!/usr/bin/env python
    ###
    #
    # Testing stub - Splunk external/scripted lookup
    #
    ###

    import csv
    import sys

    def main():
        # Check input
        if len(sys.argv) != 3:
            print "Usage: python thisfile.py [field1] [field2]"
            sys.exit(1)

        # Input
        field1 = sys.argv[1]
        field2 = sys.argv[2]
        infile = sys.stdin
        outfile = sys.stdout

        r = csv.DictReader(infile)
        w = csv.DictWriter(outfile, fieldnames=r.fieldnames)
        w.writeheader()

        # Fill in whichever field is missing
        for result in r:
            if result[field1] and result[field2]:
                w.writerow(result)
            elif result[field1]:
                result[field2] = "NoField2_Data"
                w.writerow(result)
            elif result[field2]:
                result[field1] = "NoField1_Data"
                w.writerow(result)

    main()
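For reference, here is the script's fill-missing-field loop as a self-contained Python 3 sketch (the post's script is Python 2; the field names and sample data below are taken from the post, and the function name is mine):

```python
import csv
import io

def fill_missing(infile, outfile, field1, field2):
    """Echo a CSV back, filling an empty field with a placeholder,
    the way the external lookup stub in the post does."""
    reader = csv.DictReader(infile)
    writer = csv.DictWriter(outfile, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        if row[field1] and row[field2]:
            writer.writerow(row)
        elif row[field1]:
            row[field2] = "NoField2_Data"
            writer.writerow(row)
        elif row[field2]:
            row[field1] = "NoField1_Data"
            writer.writerow(row)

# Reproduce the CLI test from the post with in-memory streams
src = io.StringIO("field1,field2\nmydataexample,\n")
dst = io.StringIO()
fill_missing(src, dst, "field1", "field2")
print(dst.getvalue())
```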
Hello, we have Hortonworks Hive 3.1, and when we try to query the connection from Splunk DB Connect to Hortonworks Hive, I see an SQL method error. We are using the same drivers that our internal apps use.
I'd like to pull a complete listing of all domain controllers in my environment and I'd like to do it through Splunk. Does anyone have some helpful SPL that can query the network for this?
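One possible sketch, assuming the SA-ldapsearch app is installed and configured (the domain stanza name is a placeholder): domain controllers are machine accounts with the SERVER_TRUST_ACCOUNT bit (8192) set in userAccountControl, so an LDAP matching-rule filter can list them:

```
| ldapsearch domain=default search="(&(objectCategory=computer)(userAccountControl:1.2.840.113556.1.4.803:=8192))" attrs="cn,dNSHostName,operatingSystem"
| table cn dNSHostName operatingSystem
```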
Hello, I have the following search:

    index="_internal" sourcetype="scheduler" thread_id="AlertNotifier*" NOT (alert_actions="summary_index" OR alert_actions="")
    | rex field=savedsearch_id "(.+;.+;(?P<title>.+))"
    | timechart useother=f limit=200 span=1h count by title
    | addcoltotals labelfield=_time label="Total Sum"
    | addtotals
    | sort _time desc
    | table _time, Total, *
    | where Total != 0
    | rename Total AS "#Alerts/h"

It builds a table of the triggered alerts with the frequency per hour. Basically it works fine, but I would like to achieve one more thing: I would like the columns with the highest "Total Sum" to be ordered from left to right, so that I can immediately see the most frequently triggered alerts on the left side of the table. How would I achieve this? Kind regards, Kamil
I want to monitor certain events and all Error/Critical level events (following https://answers.splunk.com/answers/663023/how-to-monitor-wineventlogsystem-event-logs-for-cr.html):

    [WinEventLog://Application]
    disabled = 0
    index = wineventlog
    interval = 60
    whitelist = 1000, 1001, 11707, 11724, 104
    whitelist2 = Type="^[1|2]"

I tried it with and without the first whitelist commented out (thinking it was overriding the second one), but it isn't picking up the events.
Hi all! I have a base search that just reports users connected to a VPN service:

    index=netvpn | stats count by user

Very simple. I then want to run those users against ldapsearch and get their employeeType and displayName. Is there any way I can use a subsearch to achieve this? I have already tried:

    index=netvpn sourcetype="pulse:connectsecure"
    | stats count by user
        [| ldapsearch domain=*obfuscated* search="(sAMAccountName=$user$)" attrs="employeeType displayName" ]
    | table employeeType, displayName

But I don't get any results. Am I close? The output of "user" in the base search is the sAMAccountName in Active Directory, so I shouldn't need to rename any fields. I also have a scheduled search that outputs the base search to a CSV, if that makes the process any easier. Many thanks, Chris
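For context on why a subsearch here returns nothing: a subsearch runs first and its output becomes search terms for the outer search, so $user$ is never substituted per result row. A common workaround (a sketch; the lookup file name is made up) is to schedule the ldapsearch output into a lookup file and enrich with it:

```
index=netvpn sourcetype="pulse:connectsecure"
| stats count by user
| lookup ad_users.csv sAMAccountName AS user OUTPUT displayName employeeType
```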
Hello, we have a role that has permission to view an app, and we have configured a SAML group to associate an AD group with that role. Users in this group can view the app correctly, and it has been working fine for a long time.

The problem is when a new user is added to this group: no matter what I try, the new user cannot access the specific app, and cannot even see it in the app list. Among the things I have tried are "reloading authentication configuration" and /debug/refresh, but that didn't help.

What am I missing in the process of granting permission to a new user in an existing group? How can I debug the problem and understand why this user cannot access (or even view) the app?

Some information about our environment: Splunk 7.2.5.1, in a clustered environment. Thanks, Avishni
One of my team members has installed the forwarder on a Windows client. Running tcpdump on the Splunk Enterprise side shows:

    08:32:06.990056 IP xxx.56097 > splunk.xxx.9997: Flags [P.], seq 777:895, ack 1, win 512, length 118
    08:32:06.990080 IP splunk.xxx.9997 > xxx.56097: Flags [.], ack 895, win 2512, length 0

My receiver is enabled on port 9997, but Splunk is not indexing the data. I have other clients using the same setup and they are being indexed. Thoughts/suggestions?
Hi, I have a Python script that pulls data from a source using a REST API and updates a KV Store collection. I want it to run on a periodic basis, say once every 24 hours. What is the best way to schedule this? Regards, V
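Two common scheduling options are an OS-level cron job, or running the script as a Splunk scripted input with an interval of 86400 seconds so Splunk itself handles the schedule. Whichever runs it, the update itself typically goes through the KV Store REST endpoint; below is a sketch that only builds the request URL and JSON body (the app and collection names are placeholders, and no network call is made):

```python
import json

def kvstore_batch_save(host, app, collection, records):
    """Build the URL and JSON body for a KV Store batch_save REST call.
    The caller would POST `body` to `url` with a splunkd auth token."""
    url = ("https://%s:8089/servicesNS/nobody/%s"
           "/storage/collections/data/%s/batch_save" % (host, app, collection))
    body = json.dumps(records)
    return url, body

# Hypothetical app/collection names for illustration
url, body = kvstore_batch_save("localhost", "search", "my_collection",
                               [{"_key": "1", "status": "ok"},
                                {"_key": "2", "status": "stale"}])
print(url)
```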
I've generated an image of our network perimeter showing the major equipment (Cisco ASA, Palo Alto, etc.), giving a visual picture of how data flows through the perimeter. The image sits in the left-hand pane of a dashboard, embedded via an HTML element so that it is clickable. In the right-hand pane I have a table. What I want is for a click on a piece of equipment in the image to drive the events shown in the right-hand table, with the underlying query changing depending on the equipment (firewall versus IDS versus VPN, etc.). Does anyone have a good solution (other than using a dropdown input box)?
Hi, I'm trying to email out a 24-hour report for the Global Protect activity page, but the option was greyed out. I understand this is because of the form, so I cloned the dashboard (adding it as a new item in the UI) and removed the form, but now I just get "Search is waiting for input..." in every panel of the new dashboard. How do I specify a time period now that I've removed the form, if that's the issue? My search string for the first panel is:

    | where event_id="globalprotectgateway-logout-succ" OR event_id="globalprotectgateway-regist-succ"
    | timechart values(count) by event_id
    | eval event=event_id
    | rename globalprotectgateway-regist-succ AS "Login"
    | rename globalprotectgateway-logout-succ AS "Logout"

Thanks for any help; I'm still quite new to this, so sorry if it's a silly question. The end goal is the Global Protect activity dashboard, just for one firewall pair (we only have GP running on one plus a test firewall anyway), for the last 24 hours, emailed out once every 24 hours. In the email report I just get "Invalid earliest_time". Cheers, Steve.
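If the cloned panels lost their time range along with the form, a fixed window can be set per search in the dashboard's Simple XML; a sketch (the query line is only a placeholder, not the dashboard's real search):

```
<search>
  <query>index=pan_logs | timechart count by event_id</query>
  <earliest>-24h@h</earliest>
  <latest>now</latest>
</search>
```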
Currently I am running the query below to generate a report based on appname, spacename, orgname, and foundation, and it takes a long time to run (for a past-24-hours search). How do I create a summary index so I can use this query efficiently and get faster results?

    sourcetype="pcf:log"
    | eval report_create_time=strftime(now(), "%Y-%m-%d %H:%M:%S,%3N")
    | eval spanID_ = coalesce(span_id, SPAN_ID, x_b3_spanid, spanId)
    | stats count(spanID_) AS spanCount by report_create_time, cf_app_name, cf_space_name, cf_org_name, foundation
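The usual shape of the answer, as a sketch (the summary index name is a placeholder): schedule a search that writes pre-aggregated rows into a summary index, then point the report at the summary. The first search below would run on a schedule (for example hourly over the previous hour), and the second replaces the slow report:

```
sourcetype="pcf:log"
| eval spanID_ = coalesce(span_id, SPAN_ID, x_b3_spanid, spanId)
| stats count(spanID_) AS spanCount by cf_app_name, cf_space_name, cf_org_name, foundation
| collect index=summary_pcf

index=summary_pcf
| stats sum(spanCount) AS spanCount by cf_app_name, cf_space_name, cf_org_name, foundation
```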
Hi team, we are using the Splunk Add-on for ServiceNow to get CMDB table data from ServiceNow, and we are seeing duplicate entries for the same sys_id. It seems that whenever a CI record is updated, a new event is created in Splunk. Please let me know how to correlate these events; we are trying to save license and storage, and would rather not index the same data again.
The Java Agent for IBM WebSphere collects the following metric by default:

    Metric Browser --> Application Infrastructure Performance --> <TIERNAME> --> JVM --> Threads --> Current No. of Threads

What does this value correspond to in IBM WebSphere? It seems to be constantly increasing in our environment, and I've seen values going up to 9000. However, it does not match the thread pools, or the corresponding thread pool counts, that we see in WebSphere or through JMX against particular thread pools. So what does this metric actually represent?
We have passwords in clear text for ms-Mcs-AdmPwd in Splunk, so we want to mask them. As we are using Splunk Cloud, please let me know the steps to do this. Do we need Splunk Support intervention?
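For reference, the usual mechanism for this is an index-time SEDCMD in props.conf; on Splunk Cloud that typically means applying it on a heavy forwarder you control, or asking Splunk Support to apply it on the managed side. The sourcetype and the regex below are assumptions about how the attribute appears in your events (adjust both), and note that already-indexed data stays unmasked:

```
[your:ad:sourcetype]
SEDCMD-mask-laps = s/(ms-Mcs-AdmPwd\s*[:=]\s*)\S+/\1########/g
```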
A time modifier is not working with the Splunk REST API. Here is the query:

    curl -k -u 'xxxxxxxxx:xxxxxxxxx' https://api-splunk.com:8089/services/search/jobs/export -d search="search earliest=@mon-10d latest=@mon+20d index=usage_summary report=LicenseUsage | search idx= | dedup _time,idx | timechart span=1d sum(MBytes) as LicenseUsagePerDay | stats avg(LicenseUsagePerDay) as AveLicenseUsagePerDay | table AveLicenseUsagePerDay" -d output_mode=csv

It does not return any data, but when we replace latest=@mon+20d with latest=now, it works fine.
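One thing worth checking (a guess at the cause, not confirmed against this exact setup): with curl -d, the body is sent verbatim, and in form-encoded data a literal + decodes as a space, so latest=@mon+20d may reach Splunk as "@mon 20d". Using curl --data-urlencode, or building the body from a script, avoids this. A Python 3 sketch of the encoding behavior:

```python
from urllib.parse import urlencode, parse_qs

# In an application/x-www-form-urlencoded body, a literal '+' decodes
# as a space, so latest=@mon+20d must be sent as latest=@mon%2B20d.
query = "search earliest=@mon-10d latest=@mon+20d index=usage_summary"
body = urlencode({"search": query, "output_mode": "csv"})
print(body)  # the '+' is encoded as %2B

decoded = parse_qs(body)["search"][0]
print(decoded)  # after server-side decoding, the '+' is intact
```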
I have two searches, for systems and folders. Both return a table. The fields systemID and folderID have the same values (systemID = folderID). I want foldername as a column correlated with systemID in the systems table. Is this possible? Thank you in advance!

    index=a sourcetype="systems" system_name systemID field3
    index=a sourcetype="folders" foldername folderID field3
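One way to correlate the two sourcetypes without join (a sketch, using only the field names from the post) is to search both at once, coalesce the two ID fields into one, and group on the result:

```
index=a (sourcetype="systems" OR sourcetype="folders")
| eval id=coalesce(systemID, folderID)
| stats values(system_name) AS system_name, values(foldername) AS foldername by id
```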
Hi, I have a question regarding transaction analytics data storage. From the documentation:

    Transaction Analytics (on-prem)
    Instrument up to 1,000,000 business transaction events per 24-hour period (limited to 90 days of data storage) per license unit. Customer is not entitled to access the AppDynamics-hosted Events Service.

Is it possible to store analytics data for more than 90 days? Best regards, Fredrik