All Posts

I think you'll find the contents of the multi-select token hold multiple values. That means any place you use $sub_competency$ must make sense for multiple values. Perhaps | search Sub_Competency IN ("$sub_competency$") would work better. As @marnall suggested, it depends on what the token contents look like.
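To make the idea concrete, here is a minimal sketch (not from the original reply) of what the search should end up looking like after token expansion, assuming the multiselect is configured to quote each value and join them with commas; the selected values are hypothetical:

| inputlookup cyber_q1_available_hours.csv
| rename "Sub- Competency" as Sub_Competency
``` $sub_competency$ expanded by the multiselect into a quoted, comma-separated list (values are made up) ```
| search Sub_Competency IN ("Cloud Security", "Network Security")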
Something like this?

| eventstats range(count) as varies by HOST
| where varies > 0

Here is an emulation you can play with and compare with real data. (I know that # is not a real field. It doesn't affect calculation here.)

| makeresults format=csv data="#,HOST,BGP_NEIGHBOR,BGP_STATUS,count
1,Router A,neighbor 10.1.1.1,Down,1
2,Router A,neighbor 10.1.1.1,Up,1
3,Router B,neighbor 10.2.2.2,Down,1
4,Router B,neighbor 10.2.2.2,Up,1
5,Router C,neighbor 10.3.3.3,Down,2
6,Router C,neighbor 10.3.3.3,Up,1
7,Router D,neighbor 10.4.4.4,Down,2
8,Router D,neighbor 10.4.4.4,Up,2"
``` the above emulates
.....
| rex field=_raw "(?<BGP_NEIGHBOR>neighbor\s\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})"
| rex field=_raw "(?<BGP_STATUS>(Up|Down))"
| stats count by HOST, BGP_NEIGHBOR, BGP_STATUS ```

Combining this with the above search gives

#   BGP_NEIGHBOR        BGP_STATUS   HOST       count   varies
5   neighbor 10.3.3.3   Down         Router C   2       1
6   neighbor 10.3.3.3   Up           Router C   1       1
Can you put the tokens into the dashboard titles, including the dollar signs? They will be replaced with the current value of the input, which makes it easy to check whether they hold the values you expect. Alternatively, you could post the source code of your inputs and search panel from your dashboard, so that we can see if there is a problem with them. (Be sure to censor any sensitive keywords in your source code.)
Here is my query for checking BGP routing that goes UP and DOWN. (I only want to see cases where the number of UP and DOWN events is not equal for the same Neighbor on a router.) In my case I want to show only lines #5 and #6. How do I do that?

My query:

......
| rex field=_raw "(?<BGP_NEIGHBOR>neighbor\s\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})"
| rex field=_raw "(?<BGP_STATUS>(Up|Down))"
| stats count by HOST, BGP_NEIGHBOR, BGP_STATUS

#   HOST       BGP_NEIGHBOR        BGP_STATUS   count
1   Router A   neighbor 10.1.1.1   Down         1
2   Router A   neighbor 10.1.1.1   Up           1
3   Router B   neighbor 10.2.2.2   Down         1
4   Router B   neighbor 10.2.2.2   Up           1
5   Router C   neighbor 10.3.3.3   Down         2
6   Router C   neighbor 10.3.3.3   Up           1
7   Router D   neighbor 10.4.4.4   Down         2
8   Router D   neighbor 10.4.4.4   Up           2
So the action works when you run it ad-hoc on a container, but not from within a playbook? If so, could you try making the action name shorter?
Query1:
| tstats count as Requests sum(attributes.ResponseTime) as TotalResponseTime where index=app-index NOT attributes.uriPath("/", null, "/provider")
| eval TotResTime=TotalResponseTime/Requests
| fields TotResTime

Query2:
| tstats count as Requests sum(attributes.latencyTime) as TotalatcyTime where index=app-index NOT attributes.uriPath("/", null, "/provider")
| eval TotlatencyTime=TotalatcyTime/Requests
| fields TotlatencyTime

We want to combine these two queries into a single area chart panel. How can we do this?
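One possible hedged sketch of the kind of combined search being asked about (not a confirmed answer from this thread): compute both sums in a single tstats split on _time, which gives an area chart two series to plot. Index and field names are taken from the queries above; the 5-minute span and the omitted uriPath exclusion are assumptions.

| tstats count as Requests sum(attributes.ResponseTime) as TotalResponseTime sum(attributes.latencyTime) as TotalLatencyTime where index=app-index by _time span=5m
``` re-apply the uriPath exclusion from the original queries here ```
| eval TotResTime=TotalResponseTime/Requests
| eval TotLatencyTime=TotalLatencyTime/Requests
| table _time TotResTime TotLatencyTime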
I don't think that app will handle the Microsoft Graph authentication flow, at least not out of the box. You may be better off writing a script or building a custom modular input in the Splunk Add-on Builder to get those call records.
Are you sure that the NAS is fast enough to receive and write the logs of your Splunk cluster? You need a lot of bandwidth and IOPS, especially if the NAS is being used by multiple indexers at the same time.
Hello, I'm curious to know if you were able to successfully migrate from Windows to Linux? I opened a support ticket for help and they referred me to this forum posting, but the steps mentioned here are not sufficient and some steps seem out of order. For example, it says to copy the home directory from the old to the new server and then install Splunk on the new server, but wouldn't this just overwrite the files you copied over?

After opening a second ticket with a different Splunk support rep, they suggested I (1) install a default Splunk instance on the new server and (2) copy only the $SPLUNK_HOME\var\lib and $SPLUNK_HOME\etc directories, as well as the directory containing my cold search DBs. Lastly, they recommended I update my configuration (.conf) files to point to the new locations on the Linux server. However, I received no specific guidance on which files to update other than the indexes.conf file. The final recommendation from Splunk support was to check *all* configuration files in the $SPLUNK_HOME\etc directory.

When I pointed out that there are over 300 configuration files in our $SPLUNK_HOME\etc directory, they confirmed that we must check and update all 300+ files, which is not feasible for us. At this point I've given up, but maybe someone else on here has had success?
You could use the /services/search/v2/jobs REST endpoint

| rest /services/search/v2/jobs
| search label = "SOC - *"
| sort - updated
| table label updated author
``` add fields as desired ```
What happens if you run btool on the settings stanza and grep for max_upload_size? e.g.

/opt/splunk/bin/splunk btool web list settings | grep max_upload

If it shows a value other than 8000, then likely your web.conf file is in the wrong place, or being overridden by another.
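For reference, a minimal sketch of the kind of web.conf stanza being checked here (the file location is one valid option; configuration precedence may pick up a different copy, and 8000 is simply the value mentioned above):

# $SPLUNK_HOME/etc/system/local/web.conf
[settings]
max_upload_size = 8000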
Thank you for the assistance!
@ITWhisperer Appreciate your help on this, but the query doesn't seem to be working. For both count=1 and count=2 I see entries that appear in both the lookup and the indexed events.
Hi All, I'm trying to build a dashboard that will take input from a dropdown field and perform a search based on the item selected from the dashboard. I have two inputs, one dropdown and one multiselect. I am passing two tokens, $competency$ for the dropdown and $sub_competency$ for the multiselect.

My token sub_competency is not syncing with the dashboard. I am adding it like this: | search Sub_Competency="$sub_competency$"

| inputlookup cyber_q1_available_hours.csv
| rename "Sub- Competency" as Sub_Competency
| search Sub_Competency="$sub_competency$"
| eval split_name=split('Resource Name', ",")
| eval first_name=mvindex(split_name,1)
| eval last_name=mvindex(split_name,0)
| eval Resource_Name=trim(first_name) . " " . trim(last_name)
| stats count, values(Sub_Competency) as Sub_Competency values(Competency) as Competency values("FWD Looking Util") as FWD_Util values("YTD Util") as YTD_Util by Resource_Name
| search Competency="$selected_competency$"
| table Resource_Name, Competency, Sub_Competency, FWD_Util, YTD_Util
| sort FWD_Util

Need some urgent help on this. Thanks in advance
Unfortunately, AppDynamics does not support Integrated Windows Authentication as part of the Browser Synthetic Monitoring functionality. See https://docs.appdynamics.com/appd/onprem/24.x/latest/en/end-user-monitoring/synthetic-monitoring/browser-synthetic-monitoring
Depending on the application, there may be workarounds; if IWA is the only option for MFA, there's not a good answer right now. Feel free to open an Idea ticket and post the link here so I can support the entry.
Sorry, I thought that was obvious.

index=prod_syslogfarm
| lookup cmdb_asset_inventory.csv Reporting_Host as IP_Address
| lookup cmdb_asset_inventory.csv Reporting_Host as fqdn_hostname
| lookup cmdb_asset_inventory.csv Reporting_Host as hostname
| stats count by Hostname
| append [| inputlookup cmdb_asset_inventory.csv | stats count by Hostname]
| stats count by Hostname
| where count=1

The way it works is to look up the IP address, FQDN hostname and hostname from the events against the lookup, which produces a list of Hostnames that matched the lookup. Next, a list of Hostnames from the lookup file is appended. Now, when you count the Hostnames, a count of 1 means the Hostname appears only in the lookup and not in the events (Hostnames found in both would have a count of 2).
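As a minimal sketch of just the counting logic (not part of the original reply, with hypothetical hostnames), here is a standalone emulation you can run: host-a appears in both the "event" list and the "lookup" list and ends with count=2, while host-b exists only in the lookup list and survives the where count=1 filter.

| makeresults format=csv data="Hostname
host-a"
``` hypothetical Hostnames seen in the events after the lookups ```
| stats count by Hostname
| append [
  | makeresults format=csv data="Hostname
host-a
host-b"
  ``` hypothetical full list of Hostnames from the lookup file ```
  | stats count by Hostname ]
| stats count by Hostname
| where count=1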
@ITWhisperer Could you explain how this works? Do I need to append this to my original query? I don't see the syslog_farm index used anywhere in your search query.
As we always say in this forum, illustration of raw input (in text format) is critical for the question to be answerable. Thank you for finally getting to the data. My previous answer was based on KendallW's emulation. This latest illustration is not only different from that emulation, but also different from your initial screenshot. One fundamental difference is that this data includes multiple days, potentially in the future. It seems that the input is from a prediction of sorts.

This said, I also realized that the JSON keys themselves can be used to simplify the solution if you are using Splunk 8.1 or later. Again, regex is NOT the correct tool for structured data. Here is the code you can try:

| eval today = strftime(now(), "%F"), tomorrow = strftime(relative_time(now(), "+1d"), "%F")
| eval today = json_extract(_raw, "result.watt_hours_day." . today)
| eval tomorrow = json_extract(_raw, "result.watt_hours_day." . tomorrow)

Here is an emulation for you to play with and compare with real data. Because your illustrated data is way in the past, I randomly picked 2019-06-26 as the search time and established a "fake_now" field instead of using the now() function. (As a result, "tomorrow" corresponds to 2019-06-27.)

| makeresults
| eval _raw="{
  \"result\": {
    \"watts\": {
      \"2019-06-22 05:15:00\": 17,
      \"2019-06-22 05:30:00\": 22,
      \"2019-06-22 05:45:00\": 27,
      \"2019-06-29 20:15:00\": 14,
      \"2019-06-29 20:30:00\": 11,
      \"2019-06-29 20:45:00\": 7
    },
    \"watt_hours\": {
      \"2019-06-22 05:15:00\": 0,
      \"2019-06-22 05:30:00\": 6,
      \"2019-06-22 05:45:00\": 12,
      \"2019-06-29 20:15:00\": 2545,
      \"2019-06-29 20:30:00\": 2548,
      \"2019-06-29 20:45:00\": 2550
    },
    \"watt_hours_day\": {
      \"2019-06-22\": 2626,
      \"2019-06-23\": 2918,
      \"2019-06-24\": 2526,
      \"2019-06-25\": 2866,
      \"2019-06-26\": 2892,
      \"2019-06-27\": 1900,
      \"2019-06-28\": 2199,
      \"2019-06-29\": 2550
    }
  },
  \"message\": {
    \"type\": \"success\",
    \"code\": 0,
    \"text\": \"\"
  }
}"
| spath
| eval fake_now = strptime("2019-06-26 18:15:06", "%F %T")
| eval today = strftime(fake_now, "%F"), tomorrow = strftime(relative_time(fake_now, "+1d"), "%F")
| eval today = json_extract(_raw, "result.watt_hours_day." . today)
| eval tomorrow = json_extract(_raw, "result.watt_hours_day." . tomorrow)
| fields result.watt_hours_day.2019-06-26 result.watt_hours_day.2019-06-27 today tomorrow

Output is

today   tomorrow   result.watt_hours_day.2019-06-26   result.watt_hours_day.2019-06-27   _raw
2892    1900       2892                               1900                               { the full JSON event shown in the emulation above }
Hi @BRFZ, don't use the inputs you can select during installation: they are enabled in $SPLUNK_HOME\etc\system\local and aren't manageable by the Deployment Server. It's better not to enable these inputs and instead install (manually or by Deployment Server) the Splunk_TA_windows, remembering to enable its inputs. In this way, you can also define the index in which these logs are stored. Anyway, answering your question: by default they go to the main index. Ciao. Giuseppe
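As an illustrative sketch (not from the original reply) of enabling an input and assigning an index in the add-on: the stanza below is one of the standard Splunk_TA_windows event log inputs, and the index name is a placeholder you would create yourself.

# Splunk_TA_windows/local/inputs.conf
[WinEventLog://Security]
disabled = 0
index = wineventlog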
| lookup cmdb_asset_inventory.csv Reporting_Host as IP_Address
| lookup cmdb_asset_inventory.csv Reporting_Host as fqdn_hostname
| lookup cmdb_asset_inventory.csv Reporting_Host as hostname
| stats count by Hostname
| append [| inputlookup cmdb_asset_inventory.csv | stats count by Hostname]
| stats count by Hostname
| where count=1