The purpose of these dashboards is health checks. We manually check them to see whether the errors are within the thresholds; if a threshold is breached, we check whether there is an actual issue. Although we have alerts configured, we perform these checks manually six times a day to ensure stability. We would like to move away from manual checks and explore automation options.
Thanks @inventsekar. Apologies if my question was unclear, but we already have alerts and dashboards configured. What we want now is an automated health check with a simple RGB status that reports the state of the dashboards and alerts.
Hi @sameerdeepu2000 ... you can create Splunk Alerts easily.. let me give you an example...
1) Simply run a Splunk search query (index=User_Custom_Index username=testUser | stats count by username)
2) Save that search query as an alert (just above the time picker, choose "Save As" and pick "Alert" in the drop-down)
3) A Splunk alert gives you options to send email alerts.. for example, if the count for a user is above 10, you can send an email alert to your team DL.
Please find the doc link: https://docs.splunk.com/Documentation/Splunk/latest/Alert/Aboutalerts
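If it helps, here is a minimal sketch of a search that could be saved as such an alert; the index, username, and the threshold of 10 are only the placeholders from the example above, not anything from your environment:
index=User_Custom_Index username=testUser
| stats count by username
| where count > 10
Saving this with a trigger condition of "number of results is greater than 0" keeps the threshold inside the search itself rather than in the alert configuration.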
Hi @KR1, sorry but I don't understand your issue: is it about the columns or about the displayed values (as far as I can tell from your screenshots)? If it's about the columns, do you want them in a specific order (which order?), or do you not want to display empty columns? Ciao. Giuseppe
Hi @Arty .. >> an application to send SOAR files to a server? By "server", do you mean something outside the regular user server?!? Or do you mean it is part of the SOAR playbook tasks?
Hi @dgwann, in the dashboard, as I said, you should create an input in DB Connect, index the results in Splunk, and use my first solution in the dashboards. Ciao. Giuseppe
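Purely for illustration (the index, sourcetype, and field names below are hypothetical): once a DB Connect input is scheduled and indexing the database rows, the dashboard panel can query that index like any other Splunk data:
index=dbx_orders sourcetype=dbx_orders_input
| stats count by order_status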
>>> Is it possible to develop apps outside SOAR with an IDE like Visual Studio, and using which language?.. Python, or something else? May I know whether you have done any Splunk app development for Splunk Enterprise?.. (as you are a new member)
Hi All, I am from an application production support team, and we use Splunk as our monitoring tool along with other tools. We use Splunk primarily to understand user actions via logs. We have built some traditional dashboards and alerts to enhance our monitoring. Our application health checks include manually looking at the Splunk dashboards for any spike in errors. I would like to automate this step, so that the dashboards are checked automatically and any queries that are trending red are reported, preferably as an RGB status posted to a Teams chat or sent by email. Any leads on how to build this solution are much appreciated.
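Not a definitive design, but one way to sketch the automated check is a scheduled search that reproduces a dashboard panel's error count and maps it to a colour; the index, search terms, and thresholds below are purely hypothetical placeholders:
index=app_logs sourcetype=app_errors ("ERROR" OR "FATAL")
| stats count as error_count
| eval status=case(error_count>100, "RED", error_count>50, "AMBER", true(), "GREEN")
| table status error_count
Saved as an alert that always triggers, the resulting status could be delivered by the built-in email or webhook alert actions (a Teams incoming webhook, for example), replacing the manual six-times-a-day check.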
Below is the query one should use:
[| makeresults
| eval earliest=(floor(now()/600))*600-600,
latest=(floor(now()/600))*600
]
| search index=test ....
Because the subsearch is executed first, the query becomes:
earliest=1234 latest=5678 index=test ...
If one tries to rename earliest or latest to different names (like my_early_time, my_latest_time), there will be no results because the query will become:
my_early_time=1234 my_latest_time=5678 index=test ...
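As a worked example (the timestamp is arbitrary): if now() returns 1700000123, then floor(1700000123/600)*600 = 1699999800, so the subsearch yields earliest=1699999200 and latest=1699999800, i.e. the last complete 10-minute window before the current one.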
@richgalloway Even though https://www.glassbox.com/platform/integrations/ suggests so, I can't find any relevant documentation on extracting/exporting the logs/events from a user session in Glassbox. If you could help me with these steps, it would be greatly appreciated. Thank you.
There are a few ways to onboard data into Splunk:
- Install a universal forwarder on the server to send log files to Splunk
- Have the server send syslog data to Splunk via a syslog server or Splunk Connect for Syslog
- Use the server's API to extract data for indexing
- Use Splunk DB Connect to pull data from the server's SQL database
- Have the application send data directly to Splunk using HTTP Event Collector (HEC)
This site looks like it has potential: https://www.glassbox.com/platform/integrations/
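Whichever route is chosen, a quick sanity-check search confirms the data is arriving and being parsed; the index name here is only an assumption about how the new data might be routed:
index=glassbox_events
| stats count by sourcetype source
| sort -count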
Does anyone know how to integrate Glassbox session events with Splunk? By the way, there is no option to export Glassbox session events alone, but we can see all the events in the expert view section inside the session. Alternatively, is there a way to export these events from a Glassbox session in JSON/text format?
In a pretty fresh 9.1.1 Splunk install (only Dashboard Examples has been added), when I tried to run the search in the Accepted Solution you linked to, I wasn't getting any results because the app and view fields weren't being extracted in the Search app context in Verbose Mode. Not sure if something has changed in the field extractions on _internal since that was posted in 2019. If I add a rex command to force the field extraction, it seems to pick up all of the hits to splunkweb to track app usage:
index=_internal sourcetype="splunk_web_access" method="GET" status="200" user!=-
| rex field=uri_path "\/app\/(?<app>[^\/]+)\/(?<view>[^\/\s]+)"
| stats count latest(_time) as ViewTime by user app view
| sort -count
| eventstats sum(count) as countByApp list(view) as view list(count) as count list(ViewTime) as ViewTime by user app
| convert timeformat="%a %m/%d/%Y %I:%M:%S %p" ctime(ViewTime)
| dedup app
| appendpipe [stats sum(count) as count by user | eval view = "Total Views"]
| sort + user -countByApp
Note: There are other things happening in splunkweb that you don't have to count - image fetches, calls to the manager endpoints, etc. You can see them here:
index=_internal sourcetype="splunk_web_access" method="GET" status="200" user!=-
| rex field=uri_path "\/app\/(?<app>[^\/]+)\/(?<view>[^\/\s]+)"
| search NOT app=*
Good that you solved the issue.. it's an interesting one.. the floor function "was" working fine some years ago, and now something seems wrong.. I tried using eval with floor.. it complains that the floor output on the right side cannot be assigned to the variable on the left side.. I also tried a bool check inside an if along with floor, but no luck. I checked eval's documentation... no luck. Something seems wrong with the eval command. I tried the subsearch-and-return logic as well.. still no luck. OK, it's good that you were able to solve this issue.. please also post your final SPL query, for everyone's learning. Thanks.
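For reference, a minimal sketch showing eval with floor working as expected when the syntax is right; the 600-second bucket and field names are arbitrary choices, not anything from this thread:
| makeresults
| eval bucket_start=floor(now()/600)*600, bucket_end=bucket_start+600
| eval start_str=strftime(bucket_start, "%F %T"), end_str=strftime(bucket_end, "%F %T")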
From the end: useother=f is an option to the timechart command which, along with the "limit" parameter, changes how timechart works. The timechart command has a default limit of 10 (or is it 20?) series. So if you do a timechart split by some field, it will generate separate time series only up to that limit. All remaining data will (or will not, depending on whether the useother parameter is set to true or false) be aggregated into a single series called "other". Setting limit to 0 causes timechart to generate a separate series for each value of the field you're splitting your data by, regardless of its cardinality (see the sketch after this post).
And your main problem is, on the one hand, relatively easy, because your Splunk instance has some license limitations (I assume that if you have 3 separate sites, they didn't go for workload-based licensing), so you have an upper limit on daily ingested data. Unfortunately, it's never that simple.
1. No one said that your ingestion will be symmetrical. At least you didn't say so. So it might skew the distribution, depending on the replication settings.
2. Raw data is one thing, but there might be additional factors: how well is your data compressing? (That you can calculate from the dbinspect output.) Do you use many indexed fields? Do you use data model acceleration? If so, are the accelerated summaries stored on the same storage as the raw data buckets or on another volume? Do you use volumes? Do the volumes have size limits which could be reached?
Generally speaking, you'd need to process the dbinspect output and/or the introspection REST endpoint output. Doing that over a cluster is generally not that different from running it on your all-in-one lab under your desk, with the exception of possible multiple copies of the same bucket spread across indexers and possible replicated-but-not-searchable buckets.
Oh, and BTW, if I see that there are "hundreds" of indexes, I begin to wonder what the reason is. The two typical reasons for splitting data into separate indexes are access control (you grant access on a per-index basis) and retention settings. There is also the issue of the cardinality of your data, so you might want to separate sources that log 10 events per day from those logging several million daily. But that's it.
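To make the limit/useother behaviour concrete, a small sketch (the index and split field are placeholders): with limit=0 and useother=f every host gets its own series and nothing is folded into an "other" series.
index=my_index
| timechart span=1h limit=0 useother=f count by host
And the compression ratio mentioned above can be estimated from dbinspect; this assumes the rawSize (bytes) and sizeOnDiskMB fields present in recent versions of its output:
| dbinspect index=my_index
| stats sum(rawSize) as raw_bytes sum(sizeOnDiskMB) as disk_mb
| eval ratio=round((disk_mb*1024*1024)/raw_bytes, 2)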