All Posts

@michael_vi First, thank you for the help! I was able to get this script updated and it now produces an alphabetical list. Thank you again.

import requests
import json

# Splunkbase API endpoint to get a list of all apps
splunkbase_api_url = "https://splunkbase.splunk.com/api/v1/app/"

# Initialize an empty list to store all apps
all_apps = []

# Initialize offset and batch size
offset = 0
batch_size = 25  # You can adjust this if needed

while True:
    # Construct the URL with the current offset
    params = {"offset": offset}
    response = requests.get(splunkbase_api_url, params=params)

    if response.status_code == 200:
        # Parse the JSON response to access the list of apps
        apps = response.json()

        # Add the retrieved apps to the list
        all_apps.extend(apps['results'])  # Use ['results'] to get the apps

        # Calculate the total number of apps from the response
        total_apps = apps['total']

        # If we have collected all apps, break the loop
        if offset >= total_apps:
            break

        # Increase the offset for the next request
        offset += batch_size
    else:
        print("Failed to retrieve apps. Status code:", response.status_code)
        break

# Extract 'appid' (app names) from the 'results'
app_names = [app['appid'] for app in all_apps]

# Sort the app names in alphabetical order
app_names_sorted = sorted(app_names)

# Create a dictionary with the sorted app names as a list
app_data = {"app_names": app_names_sorted}

# Save the app data to a JSON file as before
output_file_path = "splunkbase_apps.json"
with open(output_file_path, 'w') as json_file:
    json.dump(app_data, json_file, indent=4)  # Use indent to format JSON

print(f"App data has been saved to {output_file_path}")
I am trying to merge two datasets, which are the results of two different searches, on a particular field value common to both. The field I want to merge on is not a 'primary key' of either dataset, and therefore there are multiple events in each dataset with a given value of this field. My expected result is that for each event in the first dataset with a particular value of that field, I will end up producing n events in the resulting dataset, where n is the number of events in the second dataset that have that particular value in the field. So for example, if I have 3 events with that field value in dataset A and 4 events with that particular field value in dataset B, then I expect to have 12 events in the result dataset (after the merge). What Splunk command(s) would be useful to merge these datasets in this fashion?
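One approach that may fit, sketched under assumptions: Splunk's join command keeps only one subsearch match per key by default, but max=0 keeps every match, which produces exactly the per-key many-to-many result described above (3 x 4 = 12 events). Here common_field and the two searches are placeholders, and the usual join subsearch result limits still apply:

<search for dataset A>
| join type=inner max=0 common_field
    [ search <search for dataset B> ]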
Good Afternoon,

I have been trying to fix this error for a few weeks now. The app was working fine and just stopped out of nowhere a few months ago. I have attempted full reinstalls of the app and searched all over Google and the Splunk Community pages; I have looked at multiple similar errors from other apps and none of the solutions helped. Permissions are correct as well. Any help would be greatly appreciated!

The full error is "Unable to initialize modular input "redfish" defined in the app "TA-redfish-add-on-for-splunk" : introspecting scheme=redfish: script running failed (PID 4535 exited with code 1)"
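For anyone triaging a similar failure, one sketch of a follow-up step, with assumptions stated: when a modular input script exits with code 1 during scheme introspection, its traceback (if any) is usually captured in splunkd.log, which can be searched along these lines. The component names below are the usual suspects rather than guaranteed matches for this app:

index=_internal sourcetype=splunkd (component=ExecProcessor OR component=ModularInputs) redfish
| table _time log_level component _raw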
Hi @swayam.pattanayak, Given how old this post is and that it did not get a reply, you may want to contact AppD Support for more help at this time. How do I submit a Support ticket? An FAQ
@av_  Thank you for the additional detail. Makes sense. I would handle that as two separate jobs. I would say this is the solution and can be cleanly executed.  
I need to get the list of ad hoc searches and saved searches run by each user from the audit logs. How do I differentiate these searches in the _audit logs? Is there a specific keyword to identify the search type?
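One way this is often distinguished, shown as a sketch rather than a definitive query: in _audit search events, scheduled saved searches normally carry a populated savedsearch_name (and a search_id beginning with "scheduler"), while ad hoc searches leave savedsearch_name empty. The exact field contents can vary by version:

index=_audit action=search info=granted search=*
| eval search_type=if(isnotnull(savedsearch_name) AND savedsearch_name!="", "saved search", "ad hoc")
| stats count by user, search_type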
Hi @Dustem,
yes, you should create an alert, scheduled e.g. once a day, like the following:

index="xx"
| bin _time span=15m
| stats dc(dest_port) as dc_ports by _time src_ip dest_ip
| where dc_ports > 10
| streamstats reset_on_change=true count as consecutive_triggers by src_ip dest_ip
| where consecutive_triggers>=5
| collect index=my_summary

that triggers the conditions you need and saves the results in a summary index.

Then, if the alert is named "scan", you can search the summary for search_name="scan" in the last three days:

index=my_summary search_name=scan
| stats count BY src_ip dest_ip
| where count>5

Obviously you have to adapt my approach to your use case.
Ciao.
Giuseppe
You have two issues here:
- firstly, timechart fills in the blanks, so even if there isn't any data for the middle days, you will still get zero counts
- secondly, the chart viz will create a timeline based on _time (whether there is data in the results table or not)

To solve this, firstly, you need to use bin and chart, and secondly, you need to create a string field for the time:

| bin _time span=1d
| chart <your aggregate function> by _time LiftState
| eval time=strftime(_time, "%F")
| fields time *
| fields - _time
Glad you found a solution that works! It's very temperamental, it seems, but a really useful piece of code for saving masses of whitespace when there are no results.
OK, I think I've gotten there with this search:

index=felix_emea sourcetype="Felixapps:prod:log" Action="Resp_VPMG"
| dedup EventIndex
| rex field=Message "^<b>(?<Region>.+)<\/b>"
| rex "Response Codes:\s(?<responseCode>\d{1,3})"
| rex field=Message ":\s(?<errCount>\d{1,4})$"
| bin _time span=1h
| stats count by _time responseCode Region
| eval {Region}=count
| fields - count

Can you tell me the purpose of this line in your code:

| fields - log_level count

as to me that drops the 'count' and 'log_level' fields, yet log_level is a value you are trying to chart.

Thanks,
Steve
With high expectations I tried it and sadly it didn't work for me. However, building on your idea that it didn't like the CSS to be initially empty, I eventually found the following solution:

<dashboard version="1.1" theme="light">
  <label>GABS Test css</label>
  <row>
    <panel>
      <table id="doneID">
        <title>Test</title>
        <search>
          <done>
            <condition match="'job.resultCount' == 0">
              <set token="doneTableHeightCSS">height: 50px !important;</set>
              <set token="doneTableAlertCSS">position:relative; top: -130px !important;</set>
            </condition>
            <condition>
              <set token="doneTableHeightCSS"></set>
              <set token="doneTableAlertCSS"></set>
            </condition>
          </done>
          <query>| stats count | search count=5</query>
          <earliest>@d</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
      </table>
    </panel>
  </row>
  <row depends="$never_set$">
    <panel>
      <html>
        <style>
          #doneID .splunk-table { 1; $doneTableHeightCSS$ }
          #doneID .alert-info { 1; $doneTableAlertCSS$ }
        </style>
      </html>
    </panel>
  </row>
</dashboard>

More than happy to give you the credit for it. If you re-submit it I'll accept it as a solution.
@efavreau You're right that 21:00 Sunday GMT would be 5:00 Monday BJT, but if you look at the cron, it also runs from 0-13 on Sunday, which is incorrect and needs to be excluded. How can we exclude this extra cron schedule?
Thanks for the feedback. This worked for me!
@kamlesh_vaghela is this something you could take a look at? I saw you answering similar questions. 
Hi @av_! Your expression looks correct to account for the 8-hour difference, assuming the cron job is executing in your timezone: 21:00 Sunday GMT would be 5:00 Monday BJT. So if that is not working as expected, then the cron job may not be running out of your GMT timezone. If it's running out of the BJT timezone, then the cron needs to be re-written to:

*/5 5-22 * * 1-5

Did you test that? What was the result? If that also isn't working, then more details are needed for people to figure out why the cron is executing in neither timezone.
This is still very confusing.

You wrote: "But in our query, we used to get results only for the previous month, not for the date range (its not accepting the double quotes ("") for the number, when I remove "", its not accepting the string)... query we used for reference..."

First of all, WHAT is not accepting what? Are you talking about an editor in Splunk Answers (this forum), a Splunk UI, a Splunk dashboard, a Splunk search window, or SPL? What does "not accepting" mean? Does the UI give you some error message? Does Splunk give an error? Or are you expecting one output and Splunk gives a different output? If so, what is the input, what is the context, what is the expected output, and why do you expect that output in this context?

Second, let me try to interpret your question: you are saying that a user sees different results when using these selections in Splunk's pre-defined time selector:
1. "Previous month" in "Presets"
2. "Between" selector in "Date range"

Is this accurate? What is the relationship between that SPL snippet and Splunk's time selector, or your dataset?
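For reference, both selectors can be reproduced directly as SPL time modifiers, which may make the comparison less confusing. A minimal sketch, where my_index is a placeholder: the "Previous month" preset corresponds to a snap-to-month window,

index=my_index earliest=-1mon@mon latest=@mon
| stats count

while a "Between" date range pins absolute boundaries, e.g. earliest="10/01/2023:00:00:00" latest="11/01/2023:00:00:00". If the two return different results, comparing the effective earliest/latest of each job in the Job Inspector usually shows why.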
Hello Splunkers!

I am looking for a way to monitor and retrieve the users that log into my Linux machine, but only the users that are part of the root or wheel groups, or who are present in any sudoers file. I was able to get the users who SSH into my machine using the '/var/log/secure' file, but my challenge is checking whether a user "has a lot of rights or not". I have some ideas in mind to achieve this, but maybe there is something out of the box in Splunk or the TA Nix Add-On...

If somebody has already tried to do something similar, any help would be appreciated!

Thanks!
GaetanVP
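One possible pattern, sketched under explicit assumptions: export the privileged accounts (root/wheel members, sudoers entries) to a lookup and filter the secure-log logins against it. The lookup name privileged_users.csv and its single user column are hypothetical, and index=os is a placeholder for wherever the TA Nix data lands; sourcetype=linux_secure is the add-on's usual sourcetype for /var/log/secure:

index=os sourcetype=linux_secure "Accepted"
| rex "Accepted \w+ for (?<user>\S+)"
| lookup privileged_users.csv user OUTPUT user AS is_privileged
| where isnotnull(is_privileged)
| stats count by user, host

Keeping the lookup fresh (e.g. a scripted input that dumps getent group root wheel and parses the sudoers files) is the part with no single out-of-the-box answer.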
Hi @efavreau, changing it to 1-5 would miss some alerts which are supposed to trigger on Monday morning. As I said, there's a time difference: the Splunk instance is in GMT while the alerts are being scheduled for China time.
Hi @av_! To double-check cron expressions, I may resort to using a tool like crontab guru. When I put the expression you provided in there, it suggests the 0-5 part of the cron expression includes Sunday: https://crontab.guru/#*/5_21-23,0-13_*_*_0-5 So if we change that part from 0-5 to 1-5, it appears that may work for you. Good luck! If you find this helpful please give it a thumbs up!
Can I do this by writing SPL?