All Topics


Hi Team, could you please advise on the error below? I'm testing Splunk SDK Python code that searches a given index, and I get:

    HTTPError: HTTP 503 Service Unavailable -- Search not executed: This search could not be dispatched because the role-based disk usage quota of search artifacts for user "test01" has been reached (usage=497MB, quota=100MB). Use the [[/app/search/]] to delete some of your search artifacts, or ask your Splunk administrator to increase the disk quota of search artifacts for your role in authorize.conf., usage=497MB, quota=100MB, user=test01, concurrency_category="historical", concurrency_context="user_instance-wide".

This error comes up intermittently, and after a while I am able to search the index again. In the SDK code I want to handle this error properly so the script runs meaningfully instead of failing. Specifically, I want to:

- catch the quota error instead of crashing;
- display the list of running/queued jobs and search queries for the user;
- refresh or kill the jobs that are stuck or overrunning, and confirm that no queries are still running;
- detect that the quota has been reached, and what is currently counting against it, so we do not submit more queries until it clears.

Please advise how to handle the quota error in the code snippet below.

The inputs are as follows:

    '--search_query', 'index=some index | table field1,field2,field3 | head 1000',
    '--earliest_time', '-24h',
    '--latest_time', 'now'

The Splunk SDK code:

    service = splunk_connect()
    splunk_search_kwargs = {
        "exec_mode": "blocking",
        "earliest_time": args.earliest_time.strip(),
        "latest_time": args.latest_time.strip(),
        "enable_lookups": "true",
    }
    jobs = service.jobs
    # kwargs must be unpacked; passing the dict positionally is a bug
    splunk_search_job = jobs.create("search " + args.search_query, **splunk_search_kwargs)
    result_count = int(splunk_search_job['resultCount'])
    print(f'{get_dt()} - No. of rows returned from search query... {result_count}')
    if result_count <= 100:
        r = splunk_search_job.results(**{"count": 100, "output_mode": "json"})
        obj = json.loads(r.read())
        sample_results = json.dumps(obj['results'], indent=4)
        print(f'{get_dt()} - displaying first 100 rows {sample_results}')
    else:
        try:
            r = splunk_search_job.results(**{"output_mode": "json"})
            obj = json.loads(r.read())
            fl_nm = f'{args.save_file}/{get_dt()}.json'
            with open(fl_nm, 'w') as f:
                f.write(json.dumps(obj['results']))
        except Exception as error:
            print(error)

Thanks
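For reference, a hedged sketch of the cleanup this post is asking about, in plain Python: detect the quota 503 by its status and message text, then list and cancel the user's dispatched jobs. The helper names below are illustrative, not part of the SDK; in real code the exception to catch would be splunklib.binding.HTTPError and `jobs` would be `service.jobs`:

```python
# Sketch, not SDK API: decide whether an error is the role-quota 503,
# and cancel every job visible to the user so the quota frees up.

def is_quota_error(status, message):
    """True when the failure is the role-based search-artifact disk quota."""
    return status == 503 and "disk usage quota" in message

def cancel_all_jobs(jobs):
    """Cancel each dispatched job; returns the cancelled SIDs.

    `jobs` is any iterable of objects with .sid and .cancel(), such as
    service.jobs in the Splunk Python SDK.
    """
    cancelled = []
    for job in jobs:
        cancelled.append(job.sid)
        job.cancel()
    return cancelled
```

The calling code would wrap jobs.create(...) in try/except, and when is_quota_error(...) matches, print the SIDs returned by cancel_all_jobs(service.jobs) before retrying; whether cancelling other searches is acceptable depends on the environment.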
For a scheduled report, is it possible not to attach the CSV file when there are no results, but to attach it when there are results? It looks like the only related option in savedsearches.conf is:

    action.email.sendcsv = [1|0]
    * Specify whether to send results as a CSV file.
    * Default: 0 (or the 'sendcsv' setting in the alert_actions.conf file)
My data is as follows:

    Location | No. of active
    US       | 200
    UK       | 20
    SZ       | 30

How do I accumulate all those locations by month in an area chart? My current search is:

    | timechart span=1mon count by location | accum us as us | accum uk as uk | accum sz as sz | fields - uk us sz | fillnull
We have this Alert Action App working on Splunk Enterprise 7.1.3. Search Head Cluster. Linux. Splunkbase says it is compatible with 7.2. Has anybody tested it on 7.3?
I installed the latest version of Splunk on my local machine to play around with. I created an index with various sourcetypes, but search doesn't work unless the index is specified. I saw another post here, https://answers.splunk.com/answers/71694/search-without-index-not-working.html, but I have no idea where to edit. Can anyone help? Thanks.
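For context, the usual fix discussed in posts like the one linked is to add the index to the set of indexes the user's role searches by default. A minimal sketch of that setting in authorize.conf, assuming a role named role_user and an index named myindex (both placeholders):

```ini
# authorize.conf -- indexes searched when no "index=" is given in the search
[role_user]
srchIndexesDefault = main;myindex
```

The same setting is also editable in the role configuration in the Splunk Web UI, as the role's "Indexes searched by default".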
I have a question: can I use Splunk's time picker in a calculation? Right now it always assumes 30 days:

    | eval minPercentage=round((duration/2592000)*100,1)

I would like to replace the hardcoded 2592000 (30 days in seconds) with the actual range, e.g. month to date, or last month.

    index=onboarding sourcetype="ping:output"
    | xmlkv
    | search succeed_count=* description=""
    | transaction ip_adress startswith=succeed_count=1
    | search eventcount!=1
    | eval Notification=case(duration>=14400,"Not available for more than 4 hours",1=1,"Sign up")
    | search Notification!="Sign up"
    | eval duration=duration-14400
    | append [| makeresults | eval duration="0"]
    | stats sum(duration) as duration
    | eval minPercentage=round((duration/2592000)*100,1)
    | eval percentage=100-minPercentage
    | fields percentage
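To illustrate what would replace the hardcoded 2592000 (30 days in seconds), here is the month-to-date arithmetic in plain Python; in a dashboard the same number would normally be derived from the time picker's earliest/latest tokens rather than computed like this:

```python
from datetime import datetime

def seconds_month_to_date(now):
    """Seconds elapsed from the start of the current month up to `now`."""
    month_start = now.replace(day=1, hour=0, minute=0, second=0, microsecond=0)
    return int((now - month_start).total_seconds())

# e.g. halfway through 15 May: 14 full days plus 12 hours
print(seconds_month_to_date(datetime(2020, 5, 15, 12, 0, 0)))  # 1252800
```

In SPL the analogous values are usually obtained with addinfo (info_min_time / info_max_time) or relative_time(now(), "@mon"), so the divisor tracks whatever range the picker selects.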
I want a table that looks like this: the first column, UserID, is the identity; the second column is the earliest timestamp at which that ID appears; the third column is the sum of the viewing time over the 3 days after the ID first appeared. I don't know how to set the time range, because it is the three days following the earliest timestamp, and that is different for each ID.
Hi, I'm working on an Akamai JSON and I want to extract the OS name from the message.UA field. Basically, if you look at the fake sample string below, I only want to get Windows (the part of the string between the first ( and the next %):

    Mozilla%2f5.0%20(Windows%20NT%2018.0%3b%20Win64%3b%20x64)%20AppleWebKit%2f580.36%20(KHTML,%20like%20Gecko)%20Chrome%2f81.0.4042.140%20Safari%2f537.36

I already created a regular expression that does exactly what I want, but I'm not able to make it work with rex (as you can imagine, I'm new to Splunk). Here is how I'm trying to use it:

    index=akamai | regex field = message.UA "(?<=\()(.*?)(?=\%)" | top message.UA

When I run it I get: Error in 'rex' command: The regex '(?<=\()(.*?)(?=\%)' does not extract anything. It should specify at least one named group. Format: (?<name>...). Any idea how to accomplish this extraction? Thanks!
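To show the shape of the fix (rex requires at least one named capture group, and the lookarounds are unnecessary here), a plain-Python sketch against the sample string; the group name os_name is illustrative:

```python
import re

UA = ("Mozilla%2f5.0%20(Windows%20NT%2018.0%3b%20Win64%3b%20x64)"
      "%20AppleWebKit%2f580.36%20(KHTML,%20like%20Gecko)"
      "%20Chrome%2f81.0.4042.140%20Safari%2f537.36")

# Capture everything between the first "(" and the next "%".
match = re.search(r"\((?P<os_name>[^%]*)%", UA)
print(match.group("os_name"))  # Windows
```

The equivalent rex pattern would use the (?<os_name>...) named-group syntax against the message.UA field; treat that as a sketch to verify in Splunk, since it was only tested in Python here.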
Hi everyone, I am following this Splunk doc on how to restore reduced buckets: https://docs.splunk.com/Documentation/Splunk/latest/Indexer/Reducetsidxdiskusage#Restore_reduced_buckets_to_their_original_state My question is: how do I restore reduced buckets for an entire index? And will the rebuilt buckets be replicated to the other peer nodes? Cheers, S
Hi, Does anyone know how to export browser snapshots? Regards, Rodson Silva
Good afternoon, We are looking at a pilot project to use Splunk to help manage our desktop inventory, using the Microsoft_windows_TA add-on and a universal forwarder installed on the desktops. The only information we will be extracting at this time is the Windows host information: system name, OS, IP address, and logged-on user. We are wondering how to set up the universal forwarders on the desktops so that, when they phone home to the deployment server and we open the forwarder management page (Settings > Forwarder Management), we can segregate the desktops from the universal forwarders currently on the servers, if that is possible. Without this, pages and pages of desktops and servers phoning home make it hard to separate the two. We are a small shop, so no clustering. Any guidance would be appreciated. Thank you
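One common way to separate the two groups in Forwarder Management is to stamp the desktops with a client name in deploymentclient.conf and filter on it. A minimal sketch, where the clientName value and the deployment server address are placeholders:

```
# deploymentclient.conf deployed to the desktop forwarders
[deployment-client]
clientName = desktop

[target-broker:deploymentServer]
targetUri = deployment-server.example.com:8089
```

Server classes can then include clients by that name (or by hostname/IP patterns), keeping the desktop fleet distinct from the server forwarders; verify the exact matching rules against the deploymentclient.conf and serverclass.conf specs.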
What happened to the free Splunk Insights for Infrastructure download link? I can't seem to find it. I have been using this cached Google search result, https://www.splunk.com/en_us/download/splunk-insights-for-infrastructure.html, but that brings me to the enterprise app. Has it been discontinued?
On a universal forwarder version 7.3.4, I am seeing the following errors from btool checks during restart:

    Invalid key in stanza [force_sourcetype_for_cisco_asa] in /opt/splunkforwarder/etc/apps/Splunk_TA_cisco-asa/default/transforms.conf, line 2: DEST_KEY (value: MetaData:Sourcetype).

and

    No spec file for: /opt/splunkforwarder/etc/apps/TA-cisco_acs/local/app.conf

The invalid-key error occurs for multiple stanzas of the default transforms.conf in every app installed. The "No spec file" error occurs for multiple apps, most often for app.conf, but for a few other .conf files as well. I compared the keys in the errors to the same keys in another instance that does not give any errors and they match exactly. Would this have anything to do with the fact that this is a universal forwarder being used as a heavy forwarder?
Hello everyone, I am trying to extract several "NEW" fields from a field and I am having trouble doing so. The field I am trying to extract from is a default field in an index, but for some reason the field name and its contents are not located in the "_raw" field, so I am unable to use the built-in Splunk field extractor to accomplish what I am trying to do. The contents of sourcefield vary, as seen below.

    sourcefield=/var/log/bash_history/localuser/DOMAIN\first.last.domain
    • I need to extract "localuser" as field1, "DOMAIN" as field2, and "first.last.domain" as field3.

    sourcefield=/var/log/bash_history/DOMAIN\first.last.domain/DOMAIN\first.last.domain
    • I need to extract "DOMAIN" as field2 and "first.last.domain" as field3.

Would it make sense to use the first example to extract all fields, since both content paths share similar strings with the exception of "localuser"? That way, if the "localuser" field doesn't exist, it would just be a NULL value. Any help will be greatly appreciated.
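A plain-Python sketch of a single pattern covering both shapes above (the group names field1/field2/field3 follow the post; the pattern itself is an illustration, untested in Splunk, that leaves field1 empty when there is no plain localuser segment):

```python
import re

# Optional plain segment (no backslash) = localuser, then DOMAIN\user at the end.
PATTERN = re.compile(
    r"(?:/(?P<field1>[^/\\]+))?/(?P<field2>[^/\\]+)\\(?P<field3>[^/\\]+)$"
)

for path in (
    r"/var/log/bash_history/localuser/DOMAIN\first.last.domain",
    r"/var/log/bash_history/DOMAIN\first.last.domain/DOMAIN\first.last.domain",
):
    m = PATTERN.search(path)
    print(m.group("field1"), m.group("field2"), m.group("field3"))
    # localuser DOMAIN first.last.domain
    # None DOMAIN first.last.domain
```

In SPL the same named groups would go into a rex against sourcefield, with (?<name>...) syntax; backslashes may need extra escaping there, so verify the pattern in Splunk before relying on it.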
I have this data coming in:

    {"endpointType":"MAC","appName":"Tracker","endpointId":"1d11dd05-a8a9-11e9-a74b-873869538d14","ip":"192.168.41.1","endpointName":"tess-mbp.lan","timestampUTC":"2020-05-27T17:07:49Z","userName":"john","type":"FileSystemObserver","hostname":"test.com","userItemId":"rm-71a7812d-9444-11e8-8e37-8b2186626e5a","clientIp":"11.212.222.240","host":"dev.test.com:192.168.48.5","userEmail":"john@test.com","details":"{\"message\":\"{\\\"type\\\":\\\"File\\\", \\\"action\\\":\\\"Renamed\\\", \\\"timestamp\\\":\\\"1590599269\\\", \\\"path\\\":\\\"/Users/john/Library/Application Support/Google/Chrome/Default/Service Worker/CacheStorage/eadf114e35641d8a14aa9648d8e1c01b4b3bb3f0/index.txt\\\", \\\"sysinfo\\\":\\\"{\\\"ItemRenamed\\\",\\\"ItemIsFile\\\"}\\\"}\"}","authType":"MEMBER_ENDPOINT","requestSignature":"POST_/v3/report","epochTime":"1590599269","user-agent":"RR Endpoint/ag-2.10.1.797 (Darwin; 19.4.0; x86_64; tests-mbp.lan; 78:4f:41:7e:e1:06)"}

The data inside details is not getting extracted. I need to get all the data from details into separate fields, like:

    type: File
    action: Renamed
    path: Users/john.........
    sysinfo: ItemRenamed:

If someone could help, it would be very appreciated.
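The details value is JSON-in-a-string, so it needs a second decode; here is a sketch with a simplified sample (the inner sysinfo value in the real event is not itself valid JSON, so it is omitted here):

```python
import json

# Simplified sample: "details" holds a JSON string whose "message" field
# holds another JSON string (two levels of escaping, as in the event above).
event = {
    "details": "{\"message\": \"{\\\"type\\\": \\\"File\\\", \\\"action\\\": \\\"Renamed\\\"}\"}"
}

details = json.loads(event["details"])    # first decode: {"message": "..."}
message = json.loads(details["message"])  # second decode: the real fields
print(message["type"], message["action"])  # File Renamed
```

In SPL the equivalent is typically two spath passes, one over details and one over the extracted message field; the Python above is only to show the double encoding.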
I have a table that shows the username, the web resource they accessed, the total number of times they accessed each file (FileCount), and the summation of all web resources they accessed. The problem is that when a user accessed, say, 8+ resources, the rows in my table grow very long for that user. In some cases, users hit over 50 resources. My question is in 2 parts:

a) Is there a way to TRUNCATE or limit this part of the table? I've seen results show up as TRUNCATED in a table before but don't recall how that was done. I want no more than 5 rows per user, with the largest counts first, but I still want Total FileCount to be the full, accurate number.

b) How do I sort the FileCount list?

How the table currently looks:

    user     Total FileCount   Resource   FileCount
    jsmith   5                 file1      5
    jdoe     30                file1      1
                               file10     3
                               file2      2
                               file3      2
                               file4      1
                               file5      7
                               file6      3
                               file7      1
                               file8      9
                               file9      1

How I WANT the table to look:

    user     Total FileCount   Resource   FileCount
    jsmith   5                 file1      5
    jdoe     30                file8      9
                               file5      7
                               file10     3
                               file6      3
                               file2      2

Current SPL (the case() range conditions are condensed but generate the same data):

    | makeresults count=35
    | streamstats count
    | eval user = case(count<=5, "jsmith", count<=35, "jdoe")
    | eval resource = case(count<=6, "file1", count<=8, "file2", count<=10, "file3", count=11, "file4", count<=18, "file5", count<=21, "file6", count=22, "file7", count<=31, "file8", count=32, "file9", count<=35, "file10")
    | stats count AS Files by user resource
    | eventstats sum(Files) AS TotalFiles by user resource
    | stats sum(Files) AS "Total FileCount", list(resource) AS Resource, list(TotalFiles) AS "FileCount" by user
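To illustrate the desired per-user truncation in plain Python (top 5 resources per user by count, while the total keeps the full sum over all resources):

```python
from collections import Counter

# (user, resource) -> access count, mirroring the sample data above
counts = {
    ("jsmith", "file1"): 5,
    ("jdoe", "file1"): 1, ("jdoe", "file2"): 2, ("jdoe", "file3"): 2,
    ("jdoe", "file4"): 1, ("jdoe", "file5"): 7, ("jdoe", "file6"): 3,
    ("jdoe", "file7"): 1, ("jdoe", "file8"): 9, ("jdoe", "file9"): 1,
    ("jdoe", "file10"): 3,
}

def summarize(counts, top_n=5):
    """Total over ALL resources, but only the top_n rows listed per user."""
    per_user = {}
    for (user, resource), n in counts.items():
        per_user.setdefault(user, Counter())[resource] = n
    return {
        user: {"total": sum(c.values()), "top": c.most_common(top_n)}
        for user, c in per_user.items()
    }

result = summarize(counts)
print(result["jdoe"]["total"])   # 30
print(result["jdoe"]["top"][0])  # ('file8', 9)
```

In SPL the analogous step is usually a sort on the count plus a per-user streamstats row counter filtered with where; the Python here only shows the intended shape of the output.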
Currently I am using the "authentication/current-context" endpoint to check the roles of the current user, and checking whether "admin" is one of them. This is not very flexible, as we do not want to always use an admin account to edit the app. Meanwhile, in the UI or via the CLI, we are able to grant R/W access on each app to different roles. Is there another endpoint to check which roles have R/W access to a specific app? If we can get that information, we can have better access control without using admin accounts.
Hello, I'm trying to audit knowledge object usage. Is there really no way to log when a knowledge object is called? Thanks and Regards.
My Splunk environment was humming right along until I had a need to very quickly add several thousand new FWDs and a bunch of new apps on those endpoints collecting many new sources and sourcetypes. I happen to have an intermediate forwarding tier of HWFs, but I don't know if that makes any difference. After I added all these new FWDs, sources and sourcetypes, I noticed my rate of ingestion on all my IDXs dropped a lot. Restarting IDXs via rolling restart doesn't seem to make a difference. What's going on?
My company has its Splunk instance set up in such a way that Windows event logs are being enriched with AD information, such as the user's manager, their OU group, etc. The system admin who set that up has since left the company and no one knows how it was done. Is there an add-on, or something on the forwarders, that could be doing this? Can it be configured to add other data to the logs? Thank you