All Posts

My raw log says "message: (c4328dd3-d16e-4df8-a8e6-b2ebcab9d8bc)". I want to extract everything inside the parentheses ( ). Thanks in advance.
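A minimal sketch of one way to do this, assuming the value always follows "message:" and sits inside a single pair of parentheses (the field name GUID is just illustrative):

... | rex field=_raw "message:\s*\((?<GUID>[^)]+)\)"

The [^)]+ capture grabs everything between the opening and closing parenthesis into the GUID field.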
I have a CSV that gets loaded weekly... the event timestamps are set on load. However, this file has multiple time fields (first discovered, last seen, etc.). I am attempting to find those events (based on those fields) that are older than 30 days, for example. I had this working fine until I introduced a lookup. I am attempting to show results grouped by owner (stats), but only those events that are more than 30 days from first discovered until now(). If I add | where Days > 30, results show every event from the file. But I know they are there... anonymized query below. What am I doing wrong?

Sample fields being eval'ed:

First Discovered: Jul 26, 2023 16:50:26 UTC
Last Observed: Jul 19, 2024 09:06:32 UTC

index=stuff source=file Severity="Critical"
| lookup detail.csv "IP Address" OUTPUTNEW Manager
| eval First_DiscoveredTS = strptime("First Discovered", "%b %d, %Y %H:%M:%S %Z"), Last_ObservedTS = strptime("Last Observed", "%b %d, %Y %H:%M:%S %Z"), firstNowDiff = (now() - First_DiscoveredTS)/86400, Days = floor(firstNowDiff)
| stats by Manager
| where Days > 30
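One possible cause, guessing from the query as posted: in eval, double quotes denote string literals, so strptime("First Discovered", ...) tries to parse the literal text "First Discovered" rather than the field value; field names containing spaces need single quotes. Also, | stats by Manager discards the Days field, so a | where Days > 30 placed after the stats can never match. A minimal sketch with both fixes applied:

index=stuff source=file Severity="Critical"
| lookup detail.csv "IP Address" OUTPUTNEW Manager
| eval First_DiscoveredTS = strptime('First Discovered', "%b %d, %Y %H:%M:%S %Z"), Days = floor((now() - First_DiscoveredTS)/86400)
| where Days > 30
| stats count by Manager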
I have installed Splunk Enterprise on an RHEL9 VM in AWS. I have tried installing via TAR and RPM. I also tried starting it as "root" and "splunk" users but it just won't start. It always hangs at the same point and when that happens I can't even SSH to my VM. I have to reboot the VM to get access to it again. It stays here for about 30 minutes (maybe longer). Then, I see the following. Any idea what might be going on?
Hello All, Can y'all give me advice on why my query is taking so long? In a dashboard it just times out, and in regular verbose mode it takes quite a bit of time. The purpose of the query is simply to search my index and output the results that match the urls in the lookup.

index=myindex sourcetype=mysource
| stats count by url
| fields - count
| search [| inputlookup LCL_url.csv | fields url]
| sort url

Thank you
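One thing worth trying, sketched on the assumption that url is available as a search-time field: move the lookup subsearch into the base search so events are filtered before stats aggregates them, rather than aggregating every event first and filtering afterwards:

index=myindex sourcetype=mysource [| inputlookup LCL_url.csv | fields url]
| stats count by url
| fields - count
| sort url

Whether this helps depends on how many events the base search returns and how large the lookup is.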
I think this is happening because it's running on Windows. I will try installing it on Linux first and will let you know if that fixes the problem. Thanks for the help, dude.
Please share more details.
I'm probably very slow on the uptake here. Supposedly I was to get a link for Splunk Cloud by email for my Google security certificate, but I did not get a direct link, and neither do I seem to find any access after logging in on splunk.com.
If I understand correctly, you would like a dashboard to have a table where clicking on any column except one does nothing, and then clicking on the one special column will cause another table to be displayed? If so, it is possible to do this by adding an Interaction and setting a token to "name" (getting the column name), then setting a requirement in the search of the other table so that the token must equal that name value. Under the Visibility settings of the other table, check the box that says "When data is unavailable, hide element".

More on this method: https://www.splunk.com/en_us/blog/tips-and-tricks/dashboard-studio-how-to-configure-show-hide-and-token-eval-in-dashboard-studio.html

And here is example JSON code for a dashboard where you need to click the "source" column to make the second table appear:

{
  "visualizations": {
    "viz_m5IbYYDW": {
      "type": "splunk.table",
      "dataSources": { "primary": "ds_ePMHur2X" },
      "eventHandlers": [
        {
          "type": "drilldown.setToken",
          "options": { "tokens": [ { "token": "test", "key": "name" } ] }
        }
      ],
      "title": "$test$"
    },
    "viz_ppdmDf4r": {
      "type": "splunk.table",
      "dataSources": { "primary": "ds_iMzJA85U_ds_ePMHur2X" },
      "eventHandlers": [
        {
          "type": "drilldown.setToken",
          "options": { "tokens": [ { "token": "test", "key": "name" } ] }
        }
      ],
      "title": "$test$",
      "hideWhenNoData": true
    }
  },
  "dataSources": {
    "ds_ePMHur2X": {
      "type": "ds.search",
      "options": {
        "query": "index=*\n| head 10\n| table _time host source sourcetype ",
        "queryParameters": { "earliest": "-24h@h", "latest": "now" }
      },
      "name": "Search_1"
    },
    "ds_iMzJA85U_ds_ePMHur2X": {
      "type": "ds.search",
      "options": {
        "query": "index=*\n| head 10\n| table _time host source sourcetype \n| head limit=100 ($test|s$ = \"source\" )",
        "queryParameters": { "earliest": "-24h@h", "latest": "now" }
      },
      "name": "Search_1 copy 1"
    }
  },
  "defaults": {
    "dataSources": {
      "ds.search": {
        "options": {
          "queryParameters": {
            "latest": "$global_time.latest$",
            "earliest": "$global_time.earliest$"
          }
        }
      }
    }
  },
  "inputs": {},
  "layout": {
    "type": "grid",
    "options": { "width": 1440, "height": 960 },
    "structure": [
      { "item": "viz_m5IbYYDW", "type": "block", "position": { "x": 0, "y": 0, "w": 720, "h": 400 } },
      { "item": "viz_ppdmDf4r", "type": "block", "position": { "x": 720, "y": 0, "w": 720, "h": 400 } }
    ],
    "globalInputs": []
  },
  "description": "",
  "title": "Make another table visible by clicking on a column"
}
I think you'll find the contents of the multi-select token are a multi-value field. That means any place you use $sub_competency$ must make sense with a multi-value field. Perhaps | search Sub_Competency IN ("$sub_competency$") would work better. As @marnall suggested, it depends on what the token contents look like.
Something like this?

| eventstats range(count) as varies by HOST
| where varies > 0

Here is an emulation you can play with and compare with real data. (I know that # is not a real field. It doesn't affect the calculation here.)

| makeresults format=csv data="#,HOST,BGP_NEIGHBOR,BGP_STATUS,count
1,Router A,neighbor 10.1.1.1,Down,1
2,Router A,neighbor 10.1.1.1,Up,1
3,Router B,neighbor 10.2.2.2,Down,1
4,Router B,neighbor 10.2.2.2,Up,1
5,Router C,neighbor 10.3.3.3,Down,2
6,Router C,neighbor 10.3.3.3,Up,1
7,Router D,neighbor 10.4.4.4,Down,2
8,Router D,neighbor 10.4.4.4,Up,2"
``` the above emulates
.....
| rex field=_raw "(?<BGP_NEIGHBOR>neighbor\s\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})"
| rex field=_raw "(?<BGP_STATUS>(Up|Down))"
| stats count by HOST, BGP_NEIGHBOR, BGP_STATUS ```

Combining this with the above search gives

#   BGP_NEIGHBOR        BGP_STATUS   HOST       count   varies
5   neighbor 10.3.3.3   Down         Router C   2       1
6   neighbor 10.3.3.3   Up           Router C   1       1
Can you put the tokens into the dashboard titles, including the dollar signs? They will be replaced with the current value of the input, which is helpful for verifying that they do not hold a wrong value. Alternatively, you could post the source code of your inputs and search panel from your dashboard so that we can see if there is a problem with them. (Be sure to censor any sensitive keywords in your source code.)
Here is my query for checking BGP routing that goes UP and DOWN. (I only want to see rows where the number of UP and DOWN events is not equal for the same neighbor on a router.) In this case I want to show only lines #5 and #6. How do I do that?

My query:

......
| rex field=_raw "(?<BGP_NEIGHBOR>neighbor\s\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})"
| rex field=_raw "(?<BGP_STATUS>(Up|Down))"
| stats count by HOST, BGP_NEIGHBOR, BGP_STATUS

#   HOST       BGP_NEIGHBOR        BGP_STATUS   count
1   Router A   neighbor 10.1.1.1   Down        1
2   Router A   neighbor 10.1.1.1   Up          1
3   Router B   neighbor 10.2.2.2   Down        1
4   Router B   neighbor 10.2.2.2   Up          1
5   Router C   neighbor 10.3.3.3   Down        2
6   Router C   neighbor 10.3.3.3   Up          1
7   Router D   neighbor 10.4.4.4   Down        2
8   Router D   neighbor 10.4.4.4   Up          2
So the action works when you run it ad-hoc on a container, but not from within a playbook? If so, could you try making the action name shorter?
Query1:

|tstats count as Requests sum(attributes.ResponseTime) as TotalResponseTime where index=app-index NOT attributes.uriPath("/", null, "/provider")
|eval TotResTime=TotalResponseTime/Requests
|fields TotResTime

Query2:

|tstats count as Requests sum(attributes.latencyTime) as TotalatcyTime where index=app-index NOT attributes.uriPath("/", null, "/provider")
|eval TotlatencyTime=TotalatcyTime/Requests
|fields TotlatencyTime

We want to combine these two queries and create an area chart panel. How can we do this?
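One possible sketch, assuming both fields live in the same events (field names and the uriPath filter are taken verbatim from the queries above, and span=5m is just an example bucket size): compute both sums in a single tstats call, split by _time, and chart the two averages together:

|tstats count as Requests sum(attributes.ResponseTime) as TotalResponseTime sum(attributes.latencyTime) as TotalLatencyTime where index=app-index NOT attributes.uriPath("/", null, "/provider") by _time span=5m
|eval TotResTime=TotalResponseTime/Requests, TotlatencyTime=TotalLatencyTime/Requests
|fields _time TotResTime TotlatencyTime

With _time in the results, the panel can be rendered as an area chart showing both series.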
I don't think that app will handle the Microsoft Graph authentication flow, at least not out of the box. You may be better off writing a script or making a custom modular input in the Splunk add-on builder to get those call records.
Are you sure that the NAS is fast enough to receive and write the logs of your Splunk cluster? You need a lot of bandwidth and IOPS, especially if the NAS is being used by multiple indexers at the same time.
Hello, I'm curious to know if you were able to successfully migrate from Windows to Linux? I opened a support ticket for help and they referred me to this forum posting, but the steps mentioned here are not sufficient, and some steps seem out of order. For example, it says to copy the home directory from the old to the new server and then install Splunk on the new server, but wouldn't this just overwrite the files you copied over?

After opening a second ticket with a different Splunk support rep, they suggested I (1) install a default Splunk instance on the new server and (2) copy only the $SPLUNK_HOME\var\lib and $SPLUNK_HOME\etc directories, as well as the directory containing my cold search DBs. Lastly, they recommended I update my configuration (.conf) files to point to the new locations on the Linux server. However, I received no specific guidance on which files to update other than the indexes.conf file.

The final recommendation from Splunk support was to check *all* configuration files in the $SPLUNK_HOME\etc directory. When I pointed out that there are over 300 configuration files in our $SPLUNK_HOME\etc directory, they confirmed that we must check and update all 300+ files, which is not feasible for us. At this point I've given up, but maybe someone else on here has had success?
You could use the /services/search/v2/jobs REST endpoint:

| rest /services/search/v2/jobs
| search label = "SOC - *"
| sort - updated
| table label updated author ```add fields as desired```
What happens if you run btool on the settings stanza and grep for max_upload_size? e.g.

/opt/splunk/bin/splunk btool web list settings | grep max_upload

If it shows a value other than 8000, then likely your web.conf file is in the wrong place or is being overridden by another.
Thank you for the assistance!