All Posts

I think this happens because it's running on Windows. I will try installing it on Linux first and I will let you know if that fixes the problem. Thanks for the help, dude!
Please share more details.
I'm probably very slow on the uptake here, but supposedly I was to get a link for Splunk Cloud by mail for my Google security certificate. However, I did not get a direct link, and neither do I seem to find any access after logging in on splunk.com.
If I understand correctly, you would like a dashboard to have a table where clicking on any column except one does nothing, and then clicking on the one special column will cause another table to be displayed? If so, it is possible to do this by adding an Interaction and setting a token to "name" (getting the column name), then setting a requirement in the search of the other table so that the token must equal that name value. Under the Visibility settings of the other table, check the box that says "When data is unavailable, hide element".

More on this method: https://www.splunk.com/en_us/blog/tips-and-tricks/dashboard-studio-how-to-configure-show-hide-and-token-eval-in-dashboard-studio.html

And here is example JSON code for a dashboard where you need to click the "source" column to make the second table appear:

{
  "visualizations": {
    "viz_m5IbYYDW": {
      "type": "splunk.table",
      "dataSources": { "primary": "ds_ePMHur2X" },
      "eventHandlers": [
        {
          "type": "drilldown.setToken",
          "options": { "tokens": [ { "token": "test", "key": "name" } ] }
        }
      ],
      "title": "$test$"
    },
    "viz_ppdmDf4r": {
      "type": "splunk.table",
      "dataSources": { "primary": "ds_iMzJA85U_ds_ePMHur2X" },
      "eventHandlers": [
        {
          "type": "drilldown.setToken",
          "options": { "tokens": [ { "token": "test", "key": "name" } ] }
        }
      ],
      "title": "$test$",
      "hideWhenNoData": true
    }
  },
  "dataSources": {
    "ds_ePMHur2X": {
      "type": "ds.search",
      "options": {
        "query": "index=*\n| head 10\n| table _time host source sourcetype ",
        "queryParameters": { "earliest": "-24h@h", "latest": "now" }
      },
      "name": "Search_1"
    },
    "ds_iMzJA85U_ds_ePMHur2X": {
      "type": "ds.search",
      "options": {
        "query": "index=*\n| head 10\n| table _time host source sourcetype \n| head limit=100 ($test|s$ = \"source\" )",
        "queryParameters": { "earliest": "-24h@h", "latest": "now" }
      },
      "name": "Search_1 copy 1"
    }
  },
  "defaults": {
    "dataSources": {
      "ds.search": {
        "options": {
          "queryParameters": {
            "latest": "$global_time.latest$",
            "earliest": "$global_time.earliest$"
          }
        }
      }
    }
  },
  "inputs": {},
  "layout": {
    "type": "grid",
    "options": { "width": 1440, "height": 960 },
    "structure": [
      { "item": "viz_m5IbYYDW", "type": "block", "position": { "x": 0, "y": 0, "w": 720, "h": 400 } },
      { "item": "viz_ppdmDf4r", "type": "block", "position": { "x": 720, "y": 0, "w": 720, "h": 400 } }
    ],
    "globalInputs": []
  },
  "description": "",
  "title": "Make another table visible by clicking on a column"
}
I think you'll find the contents of the multi-select token are a multi-value field. That means any place you use $sub_competency$ must make sense with a multi-value field. Perhaps

| search Sub_Competency IN ("$sub_competency$")

would work better. As @marnall suggested, it depends on what the token contents look like.
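To make the IN form concrete, here is a hypothetical expansion (the values and the comma delimiter are made up for illustration): if a user selects two entries in the multiselect, the search above would effectively run as

| search Sub_Competency IN ("Governance", "Risk")

which matches rows whose Sub_Competency equals either value, whereas the plain equals form only matches the single concatenated token string.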
Something like this?

| eventstats range(count) as varies by HOST
| where varies > 0

Here is an emulation you can play with and compare with real data. (I know that # is not a real field. It doesn't affect calculation here.)

| makeresults format=csv data="#,HOST,BGP_NEIGHBOR,BGP_STATUS,count
1,Router A,neighbor 10.1.1.1,Down,1
2,Router A,neighbor 10.1.1.1,Up,1
3,Router B,neighbor 10.2.2.2,Down,1
4,Router B,neighbor 10.2.2.2,Up,1
5,Router C,neighbor 10.3.3.3,Down,2
6,Router C,neighbor 10.3.3.3,Up,1
7,Router D,neighbor 10.4.4.4,Down,2
8,Router D,neighbor 10.4.4.4,Up,2"
``` the above emulates
..... | rex field=_raw "(?<BGP_NEIGHBOR>neighbor\s\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})"
| rex field=_raw "(?<BGP_STATUS>(Up|Down))"
| stats count by HOST, BGP_NEIGHBOR, BGP_STATUS ```

Combining this with the above search gives

#   BGP_NEIGHBOR        BGP_STATUS   HOST       count   varies
5   neighbor 10.3.3.3   Down         Router C   2       1
6   neighbor 10.3.3.3   Up           Router C   1       1
Can you put the tokens into the dashboard panel titles, including the dollar signs? They will be replaced with the current value of the input, which is helpful for checking whether they hold a wrong value. Alternatively, you could post the source code of your inputs and search panel from your dashboard, so that we can see if there is a problem with them. (Be sure to censor any sensitive keywords in your source code.)
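As a concrete illustration (hypothetical title text; the token names are the ones from this thread), a Dashboard Studio panel definition could carry:

"title": "competency=$competency$ | sub_competency=$sub_competency$"

When the dashboard renders, the title then displays the literal values the tokens currently hold, so a stale or misspelled token shows up immediately.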
Here is my query for checking BGP routing that goes UP and DOWN. (I only want to see when the counts of UP and DOWN are not equal for the same neighbor on a router.) In my case I want to show only lines #5 and #6. How do I do that?

My query:

......
| rex field=_raw "(?<BGP_NEIGHBOR>neighbor\s\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})"
| rex field=_raw "(?<BGP_STATUS>(Up|Down))"
| stats count by HOST, BGP_NEIGHBOR, BGP_STATUS

#   HOST       BGP_NEIGHBOR        BGP_STATUS   count
1   Router A   neighbor 10.1.1.1   Down         1
2   Router A   neighbor 10.1.1.1   Up           1
3   Router B   neighbor 10.2.2.2   Down         1
4   Router B   neighbor 10.2.2.2   Up           1
5   Router C   neighbor 10.3.3.3   Down         2
6   Router C   neighbor 10.3.3.3   Up           1
7   Router D   neighbor 10.4.4.4   Down         2
8   Router D   neighbor 10.4.4.4   Up           2
So the action works when you run it ad-hoc on a container, but not from within a playbook? If so, could you try making the action name shorter?
Query1:

| tstats count as Requests sum(attributes.ResponseTime) as TotalResponseTime where index=app-index NOT attributes.uriPath("/", null, "/provider")
| eval TotResTime=TotalResponseTime/Requests
| fields TotResTime

Query2:

| tstats count as Requests sum(attributes.latencyTime) as TotalatcyTime where index=app-index NOT attributes.uriPath("/", null, "/provider")
| eval TotlatencyTime=TotalatcyTime/Requests
| fields TotlatencyTime

We want to combine these two queries and create an area chart panel. How do we do this?
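One possible way to combine them (a sketch, not a tested answer: it assumes attributes.ResponseTime and attributes.latencyTime are indexed fields in the same events, and it rewrites the NOT clause as a field filter, since the original attributes.uriPath(...) syntax is a guess at the intended condition) is a single tstats split by _time so the area chart gets a time axis:

| tstats count as Requests
    sum(attributes.ResponseTime) as TotalResponseTime
    sum(attributes.latencyTime) as TotalLatencyTime
    where index=app-index NOT attributes.uriPath IN ("/", "null", "/provider")
    by _time span=5m
| eval TotResTime=TotalResponseTime/Requests, TotLatencyTime=TotalLatencyTime/Requests
| table _time TotResTime TotLatencyTime

Charting _time against the two computed columns gives one panel with both series. If the two metrics really must come from separate searches, appendcols is the usual way to glue the result columns together.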
I don't think that app will handle the Microsoft Graph authentication flow, at least not out of the box. You may be better off writing a script or making a custom modular input in the Splunk Add-on Builder to get those call records.
Are you sure that the NAS is fast enough to receive and write the logs of your Splunk cluster? You need a lot of bandwidth and IOPS, especially if the NAS is being used by multiple indexers at the same time.
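One way to check this from the Splunk side (a sketch using the standard metrics.log queue telemetry; adjust the span to taste) is to watch whether the indexing queue on the indexers keeps backing up, which is the usual symptom of storage that cannot keep pace:

index=_internal source=*metrics.log* group=queue name=indexqueue
| timechart span=5m avg(current_size) by host

A current_size that keeps climbing toward the queue maximum on the indexers points at the NAS being too slow.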
Hello, I'm curious to know if you were able to successfully migrate from Windows to Linux? I opened a support ticket for help and they referred me to this forum posting, but the steps mentioned here are not sufficient and some steps seem out of order. For example, it says to copy the home directory from the old to the new server and then install Splunk on the new server, but wouldn't this just overwrite the files you copied over?

After opening a second ticket with a different Splunk support rep, they suggested I (1) install a default Splunk instance on the new server and (2) copy only the $SPLUNK_HOME\var\lib and $SPLUNK_HOME\etc directories, as well as the directory containing my cold search DBs. Lastly, they recommended I update my configuration (.conf) files to point to the new locations on the Linux server. However, I received no specific guidance on which files to update other than the indexes.conf file.

The final recommendation from Splunk support was to check *all* configuration files in the $SPLUNK_HOME\etc directory. When I pointed out that there are over 300 configuration files in our $SPLUNK_HOME\etc directory, they confirmed that we must check and update all 300+ files, which is not feasible for us. At this point I've given up, but maybe someone else on here has had success?
You could use the /services/search/v2/jobs REST endpoint:

| rest /services/search/v2/jobs
| search label = "SOC - *"
| sort - updated
| table label updated author
```add fields as desired```
What happens if you run btool on the settings stanza and grep for max_upload_size? e.g.

/opt/splunk/bin/splunk btool web list settings | grep max_upload

If it shows a value other than 8000, then likely your web.conf file is in the wrong place, or is being overridden by another.
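As a follow-up, btool's --debug flag (standard btool behavior) prefixes each output line with the file that contributes the setting, which tells you directly which web.conf is winning:

/opt/splunk/bin/splunk btool web list settings --debug | grep max_upload

The path on the left of the output is the configuration file whose value is in effect.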
Thank you for the assistance!
@ITWhisperer Appreciate your help on this, but the query doesn't seem to be working. For count=1 / count=2 I see the events appear in both the lookup and the indexed events.
Hi All, I'm trying to build a dashboard that will take input from a dropdown field and perform a search based on the item selected from the dashboard. I have two inputs, one dropdown and one multiselect. I am passing two tokens: $competency$ for the dropdown and $sub_competency$ for the multiselect.

My token sub_competency is not syncing with the dashboard. I am adding it like this: | search Sub_Competency="$sub_competency$"

| inputlookup cyber_q1_available_hours.csv
| rename "Sub- Competency" as Sub_Competency
| search Sub_Competency="$sub_competency$"
| eval split_name=split('Resource Name', ",")
| eval first_name=mvindex(split_name,1)
| eval last_name=mvindex(split_name,0)
| eval Resource_Name=trim(first_name) . " " . trim(last_name)
| stats count, values(Sub_Competency) as Sub_Competency values(Competency) as Competency values("FWD Looking Util") as FWD_Util values("YTD Util") as YTD_Util by Resource_Name
| search Competency="$selected_competency$"
| table Resource_Name, Competency, Sub_Competency, FWD_Util, YTD_Util
| sort FWD_Util

Need some urgent help on this. Thanks in advance!
Unfortunately, AppDynamics does not support Integrated Windows Authentication as part of the Browser Synthetic Monitoring functionality. See https://docs.appdynamics.com/appd/onprem/24.x/latest/en/end-user-monitoring/synthetic-monitoring/browser-synthetic-monitoring

Depending on the application, there may be workarounds; if IWA is the only option for MFA, there's not a good answer right now. Feel free to open an Idea ticket and post the link here so I can support the entry.
Sorry, I thought that was obvious.

index=prod_syslogfarm
| lookup cmdb_asset_inventory.csv Reporting_Host as IP_Address
| lookup cmdb_asset_inventory.csv Reporting_Host as fqdn_hostname
| lookup cmdb_asset_inventory.csv Reporting_Host as hostname
| stats count by Hostname
| append [| inputlookup cmdb_asset_inventory.csv | stats count by Hostname]
| stats count by Hostname
| where count=1

The way it works is to look up the IP address, FQDN hostname and hostname from the events, producing a list of Hostnames that matched the lookup. Next, it appends a list of Hostnames from the lookup file. Now when you count the Hostnames, those with a count of 1 appear only in the lookup and not in the events (which would have Hostname counts of 2).