All Posts


I don't see p_index. When I create it, how exactly do I configure it? What do I put under Definition?
@heathramos Your dashboard isn't populating because it's looking for data in places that don't exist in your environment. The main culprit is probably the p_index macro. Your dashboard is using `p_index`, but this macro either doesn't exist or isn't pointing to the right place. Go to Settings > Advanced search > Search macros and see if you have one called p_index. If not, create it. If yes, make sure it's set to your actual Palo Alto index.

Tip: When you're in the Search app, you can press Cmd+Shift+E (Mac) or Ctrl+Shift+E (Windows) to expand the macros in your search and see what they actually resolve to. This will show you exactly what `p_index` is doing.

Second issue: sourcetype mismatch. The dashboard expects sourcetype="pan:xdr_incident", but your data probably has a different sourcetype. Run this to see what you actually have:

index=pan | stats count by sourcetype

Quick test: Try running the base search manually with your actual values instead of the tokens. Replace $severity$ with * and see if you get any results.

The dashboard is basically looking for certain field names like incident_id, severity, status, etc. If your XDR data doesn't have these exact field names, nothing will show up. Most of these Palo Alto app dashboards assume you've configured everything exactly as Palo Alto intended, but real environments are messier. You'll probably need to either:

Fix your data inputs to match what the dashboard expects, OR
Edit the dashboard searches to match your actual data structure

Start with that macro expansion trick and the sourcetype check - those are usually the smoking guns. Good luck! If this helps, please upvote.
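Since the original question was what to put under Definition: a minimal sketch, assuming your Palo Alto data lives in an index named pan (that index name is a guess; substitute your own):

Name: p_index
Definition: (index=pan)

After saving, a quick sanity check is to invoke the macro directly and confirm events come back:

`p_index` | stats count by sourcetype

Wrapping the definition in parentheses keeps the expansion from interacting badly with any AND/OR terms around it in the dashboard searches.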
Have you added any options to this dashboard which could prevent that user from seeing those input fields? And is it only the time input, or other inputs too?
If I run the following search, I get records:

index="pan" host="*"

None of the dashboards show any info. What could cause this?
I'm not sure if this helps you: https://splunkbase.splunk.com/app/3124 ?
Hi r. Ismo, thank you for your reply, but I still have my active edu email account. It is a trial version that needs to be completed in 60 days, and I have been valid since May 18, 2025, so almost 1 month has passed. Who can check my account from the Splunk admin panel? Thank you
An old post about those additional and undocumented parameters: https://community.splunk.com/t5/Security/splunk-show-decrypted-command-usage/m-p/656369/highlight/true#M17251
I am still using an active email.
I was using my account with my edu email in recent weeks, but what happened?
As others already said, there is probably a firewall between your workstation and splunkd running on your RHEL 7 box. It could be on the RHEL host itself, or, if there is any firewall between the network segments, those are possible candidates too. One way to test is to use SSH tunneling from your workstation to that box (if it's allowed on the RHEL side). Or you could try curl on the box itself to test whether it responds or not. Based on your screenshot, it should be up and running.
When you send a valid JSON object, you can query any data it contains. You can utilize those keys as fields, or just search for any words it has.
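As a small illustrative sketch (the event content below is invented), spath extracts the keys of a JSON event as fields that you can then search and tabulate:

| makeresults
| eval _raw="{\"user\": \"alice\", \"action\": \"login\", \"status\": 200}"
| spath
| search action=login
| table user action status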
Hi, there are a couple of versions which you can use:

https://splunkbase.splunk.com/app/4355
https://github.com/paychex/Splunk.Conf19

Or just add a git client and do a regular git add + git commit + git push from a script. That needs more manual work when you need to restore those into a SHC.

r. Ismo
Have you created that table with stats ... values()? You should try list() instead of values(); then those lines will keep their order and the amounts will also match. After that, you could use the mvzip trick above to split those into the correct rows.
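A minimal sketch of that list() plus mvzip combination, using invented field names (order_id, line_item, amount) in place of whatever the real table has:

... | stats list(line_item) AS line_item list(amount) AS amount BY order_id
| eval zipped=mvzip(line_item, amount, "|")
| mvexpand zipped
| eval line_item=mvindex(split(zipped, "|"), 0), amount=mvindex(split(zipped, "|"), 1)
| fields - zipped

Because list() preserves event order and keeps duplicates (unlike values(), which dedupes and sorts), the two multivalue fields stay aligned when zipped together.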
Sample 1: I sent the logs from Mendix to Splunk, but all the messages are saved within message:

{
level: ERROR
env: test
Message: {"Module": SplunkTest""Microflow": ACT_Splunk_Create_Test""latesterror_message": "401: Access Denied at SplunkTest.ACT_Omnext_Create_TEST (CallRest : 'Call REST (POST)') Advanced stacktrace:"http_status": "401"http_response_content": "{ "statusCode": 401, "message": "Access denied due to invalid subscription key. Make sure to provide a valid key for an active subscription." }"http_reasonphrase": "Access Denied"session_id": "13314141414141212}
}

But I would like to extract some data from the message, like below:

{
level: ERROR
env: test
Module: SplunkTest
Microflow: ACT_Splunk_Create_Test
http_reasonphrase: Access Denied
session_id: 13314141414141212
}

My question is: can this message be adjusted to what I want from within Splunk, or do I need to find a way to send the data from Mendix in a structured way?
Hi, you should always have a separate user for running the UF on any box. What this user name should be, and whether it is local or centrally managed, depends on your company's policies. Anyhow, it should be something other than root! Earlier that user was splunk, as in Enterprise; at some point it changed to splunkfwd. I'm not sure if it's currently splunk again or still splunkfwd.

If/when you use your OS's package manager to install the Splunk UF, it creates that user and you usually don't need to take care of it. But when you use the tar.gz package and install it manually or with scripts, you must create that OS-level user yourself.

The most important task is to check that this user owns all files under SPLUNK_HOME and that the correct OS user name is used in the enable boot-start settings! Basically this user name can be whatever you want, but if/when you use something other than those defaults, you must always run chown -R after you have updated the UF version!

With earlier Splunk versions you had to grant this user access to your monitored log files. Currently this is not needed if/when you are using systemd start scripts. Whether this change is good or not is another story. You can read more here:

https://splunk.my.site.com/customer/s/article/Universal-Forwarder-is-able-to-ingest-files-that-it-does-not-have-permission-to-read
https://community.splunk.com/t5/Installation/Security-issue-Splunk-UF-v9-x-is-re-adding-readall-capability/td-p/649047
https://help.splunk.com/en/splunk-enterprise/forward-and-process-data/universal-forwarder-manual/9.3/working-with-the-universal-forwarder/manage-a-linux-least-privileged-user

r. Ismo
Hi, this is not your university's support site. You must contact their support directly by email or chat and ask them to check and fix your access issue. r. Ismo
Have you tried dedup with sortby? And of course you should use bin with a new column, like:

index=main
| bin _time as time span=1month
| dedup time sortby _time
| table bill_date ID Cost _time

That way it should take only one event per month. By modifying the sort order, it will be the first or last event in the month.
Hi @sverdhan, did you try the lookup command (https://help.splunk.com/en/splunk-enterprise/search/spl-search-reference/9.4/search-commands/lookup) instead of inputlookup in your search? The lookup command is like a left join.

| tstats count WHERE index=* sourcetype=A4Server by index
| rex field=index max_match=0 "(?<clients>\w+)(?<sensitivity>_private|_public)"
| fields - count
| lookup appserverdomainmapping.csv client OUTPUT NewIndex, Domain, Sourcetype
| eval NewIndex=NewIndex.sensitivity
| table clients, sensitivity, Domain, Sourcetype, NewIndex

Ciao. Giuseppe
Thanks, but that is still not working. It's only grabbing the very first ID. The data will have many IDs per bill_date, across multiple event times/_time.
Hi @avikc100, please, next time, send the search in text mode (using the Insert/Edit Code Sample button) so you can mask the sensitive data and we can use it.

First of all, don't use search or where after the main search; put all the conditions as far left as possible, ideally in the main search:

index="webmethods_prd" source="/apps/WebMethods*/IntegrationServer/instances/default/logs/MISC.log" MISC_dynamicPrice mainframePrice!=discountPrice
| stats count BY mainframePrice discountPrice accountNumber itemId

Otherwise, you could add the dc function to identify the differing values:

index="webmethods_prd" source="/apps/WebMethods*/IntegrationServer/instances/default/logs/MISC.log" MISC_dynamicPrice
| stats dc(mainframePrice) AS mainframePrice_count dc(discountPrice) AS discountPrice_count first(mainframePrice) AS first_mainframePrice first(discountPrice) AS first_discountPrice last(mainframePrice) AS last_mainframePrice last(discountPrice) AS last_discountPrice BY accountNumber itemId
| where mainframePrice_count>1 OR discountPrice_count>1
| fields - *_count

Ciao. Giuseppe