All Posts

The query looks OK, but its speed also depends on how many events it is processing. Try running the search more often over a smaller time range, and reduce "indexes" to the smallest set of indexes that contain relevant Windows events. Consider removing the lookup and hard-coding the relevant event codes:

index IN (indexes) sourcetype=xmlwineventlog sAMAccountName IN (_x*, x_*, lx*, hh*)
| eval action = case(event_code=x, "login_failure", event_code=y, "lockout")
| stats count(eval(action=="login_failure")) as failure_count, count(eval(action=="lockout")) as lockout_count by sAMAccountName
| where failure_count >= 3 OR lockout_count > 0
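If the Authentication data model is accelerated in your environment and these Windows events are CIM-mapped, a tstats variant is usually far faster still, since it avoids scanning raw events. This is only a sketch under those assumptions, and it covers just the failure count; lockouts would need a similar clause:

| tstats summariesonly=true count as failure_count
    from datamodel=Authentication
    where Authentication.action="failure" Authentication.user IN ("_x*", "x_*", "lx*", "hh*")
    by Authentication.user
| where failure_count >= 3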
Hi Dhana, I'm currently facing the same issue. Did you find a way to solve it without using a load balancer? Thanks.
Yes. You can name multiple capture groups in one rex statement.  e.g. | rex field=my_field "foo:\s+\"(?<first_capture>[^\"]+)\",\s+bar:\s+(?<second_capture>[^\"]+)"
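If you want to sanity-check the pattern without real events, makeresults can fake a sample value (the field contents here are made up for illustration):

| makeresults
| eval my_field="foo: \"alpha\", bar: beta"
| rex field=my_field "foo:\s+\"(?<first_capture>[^\"]+)\",\s+bar:\s+(?<second_capture>[^\"]+)"
| table first_capture second_capture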
index="webmethods_prd" source="/apps/WebMethods/IntegrationServer/instances/default/logs/DFO.log" |eval timestamp=strftime(_time, "%Y-%m-%d %H:00") | stats values(B2BUnknownTrxCount) by timestamp
  index="webmethods_prd" source="/apps/WebMethods/IntegrationServer/instances/default/logs/DFO.log" | eval timestamp=strftime(_time, "%F"),hour=strftime(_time, "%H,%M")  | stats list(hour) as hour... See more...
  index="webmethods_prd" source="/apps/WebMethods/IntegrationServer/instances/default/logs/DFO.log" | eval timestamp=strftime(_time, "%F"),hour=strftime(_time, "%H,%M")  | stats list(hour) as hour, list(B2BUnknownTrxCount) by timestamp
This is the log data. I want a report like this. My current query is:

index="webmethods_prd" source="/apps/WebMethods/IntegrationServer/instances/default/logs/DFO.log"
| eval timestamp=strftime(_time, "%F")
| stats values(B2BUnknownTrxCount) by timestamp

It gives a report like this. I need to add the time in hh:mm to the chart. Please help me update my query.
Hello, I have been asked to optimize this logic because it is taking too long to run. I am not sure how else I can write it to make it run faster. It's not throwing any errors; it just takes a long time to run. Any help would be highly appreciated. Thanks!

index IN (indexes) sourcetype=xmlwineventlog sAMAccountName IN (_x*, x_*, lx*, hh*)
| lookup mas_pam_eventcode.csv event_code AS EventCode OUTPUT action
| stats count(eval(action=="login_failure")) as failure_count, count(eval(action=="lockout")) as lockout_count by sAMAccountName
| where failure_count >= 3 OR lockout_count > 0
Sample dashboard:

{
  "visualizations": {
    "viz_3hKq7uoX": {
      "type": "splunk.singlevalue",
      "options": {},
      "dataSources": { "primary": "ds_a2mWNgri" }
    },
    "viz_JUFcFWVl": {
      "type": "splunk.singlevalue",
      "options": {},
      "dataSources": { "primary": "ds_LbaP4o2H" }
    }
  },
  "dataSources": {
    "ds_a2mWNgri": {
      "type": "ds.search",
      "options": { "query": "| makeresults\n| eval selected_total = count($element$)\n| table selected_total" },
      "name": "Search_eval"
    },
    "ds_LbaP4o2H": {
      "type": "ds.search",
      "options": { "query": "| makeresults\n| stats count($element$) as selected_total\n| table selected_total" },
      "name": "Search_stats"
    }
  },
  "defaults": {
    "dataSources": {
      "ds.search": {
        "options": {
          "queryParameters": {
            "latest": "$global_time.latest$",
            "earliest": "$global_time.earliest$"
          }
        }
      }
    }
  },
  "inputs": {
    "input_global_trp": {
      "type": "input.timerange",
      "options": { "token": "global_time", "defaultValue": "-24h@h,now" },
      "title": "Global Time Range"
    },
    "input_QDGCxYVt": {
      "options": {
        "items": [
          { "label": "Apple", "value": "a" },
          { "label": "Banana", "value": "b" },
          { "label": "Coconut", "value": "c" },
          { "label": "Dragonfruit", "value": "d" },
          { "label": "Elderberry", "value": "e" },
          { "label": "Fig", "value": "f" },
          { "label": "Grape", "value": "g" }
        ],
        "token": "element"
      },
      "title": "Select something",
      "type": "input.multiselect"
    }
  },
  "layout": {
    "type": "absolute",
    "options": { "width": 1440, "height": 960, "display": "auto" },
    "structure": [
      { "item": "viz_3hKq7uoX", "type": "block", "position": { "x": 20, "y": 20, "w": 250, "h": 250 } },
      { "item": "viz_JUFcFWVl", "type": "block", "position": { "x": 290, "y": 20, "w": 250, "h": 250 } }
    ],
    "globalInputs": [ "input_global_trp", "input_QDGCxYVt" ]
  },
  "description": "test",
  "title": "test"
}
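As an aside: if the goal is simply to count how many multiselect options are selected, a common pattern is to split the token and use mvcount. This sketch assumes the multiselect keeps its default comma delimiter:

| makeresults
| eval selected="$element$", selected_total=mvcount(split(selected, ","))
| table selected_total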
Yes, column 3 should equal column 5.

*** The last row contained a mistake, which I have corrected. Abc should always equal abc.

That's how the clean tables look:

ID  col5
a   abc
b   abc
b   xyz
f   abc
i   abc
i   xyz

ID  Col3  col4
a   abc   No
a   xyz   Yes
b   xyz   No
b   fgh   Yes
b   abc   No
f   abc   No
f   xyz   No
i   xyz   Yes
i   abc   No
Hi @Scott.Lucier, I wanted to share this AppD Documentation. Let me know if it helps out.  https://docs.appdynamics.com/appd/23.x/23.11/en/appdynamics-essentials/alert-and-respond/health-rules/health-rule-schedules
Is the goal to have Col5 appear in the row where its value is an exact match to Col3? Or are the last two rows in your output actually correct?
Hi,

I have the results of an append operation as follows:

ID  Col3  col4  col5
a               abc
a   abc   No
a   xyz   Yes
b               abc
b               xyz
b   xyz   No
b   fgh   Yes
b   abc   No
f               abc
f   abc   No
f   xyz   No
i               abc
i               xyz
i   xyz   Yes
i   abc   No

The result from the first table and the result from the second should be merged respectively. I cannot use

| stats values(col1) values(col2) values(col3) by ID

because I cannot lose the distinction between "No" and "Yes" for Col3. I want to create a result as follows:

ID  Col3  col4  col5
a   abc   No    abc
a   xyz   Yes
b   xyz   No    xyz
b   fgh   Yes
b   abc   No    abc
f   abc   No    abc
f   xyz   No
i   xyz   Yes   xyz
i   abc   No    abc

I think something like SQL's full join would do the trick, but I am totally stuck.
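One way to sketch this in SPL, assuming each (ID, Col3) pair occurs only once, is to build a common join key with coalesce after the append and then re-aggregate, which keeps the Yes/No rows distinct (row order aside):

... your existing append ...
| eval key=coalesce(Col3, col5)
| stats values(col4) as col4, values(col5) as col5 by ID key
| rename key as Col3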
So to clarify: <cids> is the placeholder for the values produced by the regex, and its placement marks where the actual value would be contained in the string, i.e. the Log field?
Hey there,

Your post is a couple months old, but since I stumbled into the same issue, I figured there will be more Splunkers in the future who encounter the same challenge and would appreciate the solution being documented somewhere. The first part of my response lays out how to resolve the issue; in the second part I talk about why the issue arises in the first place.

Part 1 - How to resolve the issue

Apps > DSDL App > Configuration > Setup > Check the "Yes" box > Scroll to the "Splunk Access in Jupyter (optional)" section > Use the following settings:

Enable Splunk Access: Yes
Splunk Access Token: Paste your token here. If you don't have one, you can spawn a token under Settings > Tokens.
Splunk Host Address: Paste your host address here (in my case it has this format: 123.456.78.90)
Splunk Management Port: 8089 (This is the default; if you did not change it, you can use 8089)

Press "Test & Save".

Apps > DSDL App > Configuration > Containers > Start your container. If your container was already running, stop it and restart it.

Apps > DSDL App > Configuration > Containers > Press the "JUPYTER LAB" button. Now open the "barebone_template.ipynb" in the /notebooks folder and execute the code that pulls data from Splunk. Now it should work just fine.

import libs.SplunkSearch as SplunkSearch
search = SplunkSearch.SplunkSearch()

Part 2 - More details in case you are curious

Execute the following code in your Jupyter notebook. Here you can inspect all os variables.

import os
os.environ

Of interest to us are the following:

os.environ["splunk_access_host"]
os.environ["splunk_access_port"]
os.environ["splunk_access_token"]

If you haven't fixed the issue yet, os.environ["splunk_access_enabled"] should return "false". You most likely started the container before you made the settings I described in Part 1. These os.environ variables are important, since the function that lets you pull data from Splunk relies on them. The error in your screenshot, "An error occurred: int() argument must be a string, ...", stems from the fact that the SplunkSearch() function has no values for host/port/token.

import libs.SplunkSearch as SplunkSearch
search = SplunkSearch.SplunkSearch()

You can find the source code for the SplunkSearch function in your Jupyter Lab here: /notebooks/libs/SplunkSearch.py. Somewhere in the upper section of this Python code, you see the following:

if "splunk_access_enabled" in os.environ:
    access_enabled = os.environ["splunk_access_enabled"]
    if access_enabled == "1":
        self.host = os.environ["splunk_access_host"]
        self.port = os.environ["splunk_access_port"]
        self.token = os.environ["splunk_access_token"]

As you can see in the code above, SplunkSearch.py reads the host, port, and token you entered on the settings page if you also set Enable Splunk Access: Yes. If you are familiar with Splunk's REST API, you recognize that host, port, and token are necessary values to establish a connection from your notebook to Splunk to eventually retrieve search results for your query. I will skip the details, but here are a couple of lines from SplunkSearch.py that illustrate what packages are used, the connection that is made, as well as the search query that is initiated:
import splunklib.results as splunk_results
import splunklib.client as splunk_client

self._service = splunk_client.connect(host=self.host, port=self.port, token=self.token)

# create a search job in splunk
job = self.service.jobs.create(
    query_cleaned,
    earliest_time=earliest,
    latest_time=latest,
    adhoc_search_level="smart",
    search_mode="normal")

I hope this helps.

Regards,
Gabriel
Thank you. I was close, ugh.
| rex "(?<month>\w+)-\d"
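To sanity-check this against the sample value from the question, a throwaway makeresults test works (the field name raw is only for illustration; without field=, rex runs against _raw):

| makeresults
| eval raw="Jan-24"
| rex field=raw "(?<month>\w+)-\d"
| table month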
Hi deepakc and all,

It took a while, but I finally got around to solving this, even if in a far from elegant way. The error message does indeed appear to belong to the certification process of AOB, as deepakc mentioned. It's sort of a check whether your app uses best practices, has risks, etc. However, this is unlikely to have been the cause of why I wasn't able to get my data, despite my instance being able to connect to the internet.

However, there is one simple workaround: simply set the "verify" parameter in your HTTP request to False. E.g.:

response = helper.send_http_request("<your api link here>", "GET", parameters=None, payload=None, headers=headers, cookies=None, verify=False, cert=None, timeout=None, use_proxy=True)

It's a bit of an ugly solution, but for test purposes it does the job, and I was finally able to receive the data from my API endpoint. This is probably not advisable for production systems, for security reasons, though.

Thanks for the helpful input, and everyone else have fun while splunking!
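If you later need verification back on in production, the verify parameter is forwarded to the underlying requests library (at least in Add-on Builder's helper), where it also accepts a path to a CA bundle instead of a boolean. A sketch, with the bundle path purely illustrative:

# Hypothetical CA bundle path; assumes `verify` is forwarded to requests,
# where it accepts either a bool or a path to a CA bundle.
response = helper.send_http_request("<your api link here>", "GET",
    parameters=None, payload=None, headers=headers, cookies=None,
    verify="/opt/splunk/etc/apps/my_app/certs/ca_bundle.pem",
    cert=None, timeout=None, use_proxy=True)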
Hi all,

I have an add-on that uses a REST API to obtain specific logs; each generated event has fixed values for both source and sourcetype. Some customers use props.conf and transforms.conf to change the value of the source according to a particular column within an event; for instance, if the service is 'a', then the source changes to 'service_a'; if the service is 'b', then it changes to 'service_b'.

The current problem is that obtaining logs works fine, and events can always be found by searching on sourcetype. But when searching on the transformed source, the events cannot be found, even though events with source 'service_a' and 'service_b' are visible.

How should I adjust the add-on, or how should I configure local settings, so that I can search using source?

Regards,
Emily
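For reference, an index-time source rewrite of the kind your customers describe usually looks like the sketch below; the stanza name, sourcetype, and regex are illustrative rather than taken from your add-on. One thing worth checking: source is rewritten at index time, so events ingested before the transform was deployed keep their original source, which is a common reason a source=service_a search comes up empty while the events are still visible under the sourcetype.

# transforms.conf -- rewrite source metadata based on event content
[set_source_service_a]
REGEX = "service"\s*:\s*"a"
DEST_KEY = MetaData:Source
FORMAT = source::service_a

# props.conf -- attach the transform to the add-on's fixed sourcetype
[my_addon:sourcetype]
TRANSFORMS-set_source = set_source_service_a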
I want to extract Jan from Jan-24.
Hi @triva79,
good for you, see you next time!
Ciao and happy splunking
Giuseppe
P.S.: Karma Points are appreciated by all the contributors