Hi @kmjefferson42, this is my usual home page in my apps: I usually use titles and images, but you can use titles only:

<dashboard version="1.1">
  <label>Home Page</label>
  <row>
    <panel>
      <html>
        <h3>This is an external link</h3>
        <a href="http://www.garanteprivacy.it">
          <i><b><u><strong>text description</strong></u></b></i>
        </a>
      </html>
    </panel>
    <panel>
      <html>
        <div style="width:100%;height:100%;text-align:center;">
          <a href="http://www.garanteprivacy.it">
            <img src="/static/app/cp_fp_coba/GarantePrivacy.png" style="height:80px;border:0;"/>
          </a>
        </div>
      </html>
    </panel>
  </row>
  <row>
    <panel>
      <html>
        <h1>box1</h1>
        <p>
          <font size="2">Description</font>
        </p>
        <table border="0" cellpadding="10" align="center">
          <tr>
            <td align="center">
              <a href="your_dashboard_1">
                <img src="/static/app/your_app/your_icon.png" style="width:80px;border:0;"/>
              </a>
            </td>
            <td align="center">
              <a href="your_dashboard_1">
                <img src="/static/app/your_app/your_icon.png" style="width:80px;border:0;"/>
              </a>
            </td>
            <td align="center">
              <a href="your_dashboard_2">
                <img src="/static/app/your_app/your_icon_2.png" style="width:80px;border:0;"/>
              </a>
            </td>
          </tr>
          <tr>
            <td align="center">
              <a href="your_dashboard_1">your dashboard 1 title</a>
            </td>
            <td align="center">
              <a href="your_dashboard_1"> </a>
            </td>
            <td align="center">
              <a href="your_dashboard_3">your dashboard 2 title</a>
            </td>
          </tr>
        </table>
      </html>
    </panel>
  </row>
</dashboard>

Also search the Community: there are many answers to your questions, some of them from me.

Ciao.
Giuseppe
@kmjefferson42 Hi Ken! What you're looking to do is possible in Simple XML dashboards (I don't know about Studio). Interesting that you already have this but can't reverse engineer it? That might require more details in a different question. To answer the question you asked, please find a run-anywhere example below. To use it, create a new classic (Simple XML) dashboard, click "Source" to edit the XML, delete the existing lines, and paste in the example below. This will give you the bones for creating all the panels you want with hyperlinks.

<dashboard version="1.1">
  <label>dashboard panel for links</label>
  <row>
    <panel>
      <html>
        <h2>Panel 1</h2>
        <p>
          <ul>
            <li>This is a bulleted list of notes. Copy and paste this line of HTML for each bullet needed.</li>
            <li><a href="https://www.splunk.com/">Replace this URL with your search, report, or dashboard URL</a></li>
          </ul>
        </p>
      </html>
    </panel>
    <panel>
      <html>
        <h2>Panel 2</h2>
        <p>
          <ul>
            <li>Same content below, showing that multiple panels are possible on the same row. Season to taste.</li>
            <li><a href="https://www.splunk.com/">Replace this URL with your search, report, or dashboard URL</a></li>
          </ul>
        </p>
      </html>
    </panel>
  </row>
</dashboard>
I have faced a problem with QRadar and the transformation of logs (Trend Micro). I forwarded the logs in raw format from a Splunk heavy forwarder (HF) to QRadar. The problem is with the header of the events on QRadar: they have a duplicate hostname and timestamp (date). I tried to define syslogSourcetype = sourcetype::<sourcetype>, but the same thing occurs; they are still duplicated. Is there a way to solve this problem? I have been trying to solve this issue for a week now. Thanks
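For reference, the syslog output on the HF is configured in outputs.conf; a sketch of such a stanza is below (host, port, and sourcetype are placeholders, not from the original post):

```
# outputs.conf on the heavy forwarder -- host/port and sourcetype are placeholders
[syslog:qradar_out]
server = qradar.example.com:514
type = udp
# Restrict syslog-header handling to one sourcetype
syslogSourcetype = sourcetype::trendmicro
priority = <13>
```

If the raw events already carry a syslog header, having the output prepend another one is a common cause of the doubled hostname/timestamp, so it is worth checking whether the header is present in the data before it leaves Splunk.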
Good morning all, curious if anyone out here knows of a way to make a dashboard with clickable text, like a hyperlink in HTML, that opens a URL to search results. I have included a picture of what I am attempting to accomplish. The dashboard pictured was made with Splunk Enterprise 7.1.10. I am unable to export and import it as I have done in the past with earlier versions. I welcome any and all ideas! Much thx, Ken
Starting with the 9.2.0 release, internal metrics log event generation can be controlled by group or subgroup. If there are thousands of forwarders, the _internal index becomes the most active index and generates a lot of hot buckets. 9.2.0 provides the ability to control each metrics group/subgroup. Check out `interval` in https://docs.splunk.com/Documentation/Splunk/latest/Admin/limitsconf for more details.

Upon Splunk start, metrics.log will log all the controllable metrics groups/subgroups. Example metrics.log:

06-08-2024 03:14:49.659 +0000 INFO Metrics - Will log metrics_module=dutycycle:ingest at metrics_interval=30.000.

metrics_module is the controllable module logged in metrics.log. In dutycycle:ingest, dutycycle is the metrics group name and ingest is the subgroup name. Its default logging interval is 30 seconds.

06-08-2024 03:14:49.703 +0000 INFO Metrics - Will log metrics_module=tailingprocessor:tailreader0 at metrics_interval=60.000.

tailingprocessor is the group name and tailreader0 is the subgroup name (where the trailing `0` is the first pipeline number). Its default logging interval is 60 seconds.

The new metrics logging framework has a global default metrics logging interval of 60 seconds in limits.conf, with an exception for some modules (30 seconds) that you will find in metrics.log:

[metrics]
interval = <integer>
* Number of seconds between logging splunkd metrics to metrics.log.
* Minimum of 10.
* Default (Splunk Enterprise): 60
* Default (Splunk Universal Forwarder): 60

There are many modules in metrics.log that are never queried. For example, queue and thruput metrics are probably the most-queried metrics, but not necessarily the others. You can increase the global default to 120 seconds:

[metrics]
interval = 120

You can also customize the logging interval for other very critical metrics groups. For example, there are various `queue` metrics logged; some are always checked, some rarely:

06-08-2024 03:14:49.603 +0000 INFO Metrics - Will log metrics_module=queue:parsingqueue at metrics_interval=30.000.
06-08-2024 03:14:49.663 +0000 INFO Metrics - Will log metrics_module=queue:httpinputq at metrics_interval=30.000.
06-08-2024 03:14:49.651 +0000 INFO Metrics - Will log metrics_module=queue:stashparsing at metrics_interval=30.000.
06-08-2024 03:14:49.603 +0000 INFO Metrics - Will log metrics_module=queue:teequeue at metrics_interval=30.000.

You can set a global default for the queue group:

[queue]
interval = 60

parsingqueue at 30 seconds:

[queue:parsingqueue]
interval = 30

stashparsing at 150 seconds:

[queue:stashparsing]
interval = 150

The interval can be set for [<group>] or [<group>:<subgroup>].
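Before raising any intervals, it can help to see which metrics groups your environment actually logs, and how often. A rough SPL sketch for that (field names per the standard metrics.log format):

```
index=_internal source=*metrics.log* group=*
| stats count by group
| sort - count
```

Groups that show a high event count but are never used in your monitoring searches are the natural candidates for a longer interval.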
Hello everyone, I'm new to Splunk. Can anyone help me enable "Using visualizations to determine TTP coverage" from https://lantern.splunk.com/?title=Security%2FUCE%2FGuided_Insights%2FCyber_frameworks%2FAssessing_and_expanding_MITRE_ATT%26CK_coverage_in_Splunk_Enterprise_Security# ? https://docs.splunk.com/Documentation/ES/7.1.0/RBA/ViewMitreMatrixforRiskNotable#View_the_MITRE_ATT.... Splunk Enterprise Security, Splunk Security Essentials
I forgot to mention, in terms of pre-reqs:

1. New Relic should have some way of making API calls; you can use Splunk tokens for API use and as a means of authentication. See the link below for info:
https://docs.splunk.com/Documentation/Splunk/9.2.1/Security/CreateAuthTokens
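As a sketch of what that looks like in practice (hostname and token are placeholders), a Splunk authentication token is passed as a bearer token on REST calls to the management port:

```
# Placeholder host and token; 8089 is the default management port
curl -k -H "Authorization: Bearer <your_splunk_token>" \
  "https://splunk.example.com:8089/services/server/info?output_mode=json"
```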
Hi @shocko, yes, precisely that. The code then lives in a separate custom add-on (TA) that you control for custom code changes; it sits side by side with the Splunkbase TA, and this approach is optional. So, for example, an app called my_windows_sidecar_ta: add your local configuration there and push it out. You need to know the app structure for this; some people do it, others don't. The main thing is local. An example of creating your own TA:
https://dev.splunk.com/enterprise/tutorials/quickstart_old/createyourfirstapp/
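A minimal sidecar-TA layout might look like the sketch below (names are illustrative; only the pieces you actually override need to exist):

```
my_windows_sidecar_ta/
├── default/
│   └── app.conf          # app identity and version
├── local/
│   ├── inputs.conf       # your overrides/additions for the Windows TA
│   └── props.conf        # any custom parsing tweaks
└── metadata/
    └── local.meta        # sharing/permissions
```

Because it is a separate app, Splunkbase TA upgrades never clobber your customizations.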
Thanks @deepakc, good call on the use of the /local folder for this. When you say: "Another way is to create your own sidecar TA and have the code there and run it alongside the Windows TA." What do you mean by sidecar? Is this simply building a new app for the indexer?
Thanks for your help, it works!
Hi @schose, could you please provide the Splunk query used to check the before and after bucket sizes? Thank you.
These would come to mind first; there are plenty more, and you can explore the others and use them as you see fit.

1. Check the overall health: /services/cluster/manager/health
2. Check the cluster status of the peers (indexers): /services/cluster/manager/peers
3. Check the indexing status: /services/cluster/manager/indexes
4. Check the replication and search factor status: /services/cluster/manager/status

You can also check the CM's resources (CPU/MEM etc.):

5. Check resource utilisation on the CM: /services/server/status/resource-usage/hostwide
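The endpoints above can be polled from a script. A minimal sketch, assuming a hypothetical cluster manager hostname and a bearer token (both placeholders), using only the Python standard library:

```python
import json
import urllib.request

BASE = "https://cm.example.com:8089"  # hypothetical cluster manager, default mgmt port

# The five health checks listed above, keyed by a short name
ENDPOINTS = {
    "health":    "/services/cluster/manager/health",
    "peers":     "/services/cluster/manager/peers",
    "indexes":   "/services/cluster/manager/indexes",
    "status":    "/services/cluster/manager/status",
    "resources": "/services/server/status/resource-usage/hostwide",
}

def endpoint_url(name, base=BASE):
    """Return the full management URL for a named check, requesting JSON output."""
    return f"{base}{ENDPOINTS[name]}?output_mode=json"

def fetch(name, token):
    """Query one endpoint with a Splunk auth token; returns the parsed JSON body."""
    req = urllib.request.Request(
        endpoint_url(name),
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:  # performs the HTTPS call
        return json.load(resp)
```

`fetch("health", token)` would then return the parsed health report; swap in your own host and certificate handling as appropriate.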
Hello deepakc Thank you for your immediate reply! Do you have any prerequisites or concerns when implementing monitoring of that API endpoint?
Hello, I am using Splunk Cloud, and for some of our sourcetypes we have defined specific TRUNCATE values. I have a couple of questions. If a `TRUNCATE` value is not defined for a sourcetype, what is the default limit in characters? Is there any guideline document or rule on how to define TRUNCATE, especially whether it is recommended to set something higher than 50k or 80k characters as a limit?
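For context (worth verifying against the props.conf spec for your Splunk version): the shipped default is TRUNCATE = 10000 bytes per line, and 0 disables truncation entirely. Raising it for one sourcetype is a props.conf setting like this sketch (sourcetype name is a placeholder):

```
# props.conf -- sourcetype name is a placeholder
[my_long_json_sourcetype]
TRUNCATE = 80000
```

Very large limits are usually only warranted for single-line payloads such as big JSON events; disabling truncation outright risks memory pressure on malformed data.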
Something like | rest /services/saved/searches | where match(search, "\bindex *= *(\* |\*$)") OR NOT match(search, "\bindex *=") | fields title search
This will get you the saved searches:

| rest "/servicesNS/-/-/saved/searches" splunk_server=local

It will return a field called 'search'; you can then look in that to see what search statements are being used. Note that if your search contains macros, you will also have to expand the macros and search those.
That sort of response is unlikely to elicit further help from anyone. Please describe what you have done and, if possible, post snippets of the token management logic in your dashboard.
You won't really be able to rename the fields unless you transpose the data, which is probably not the right approach in your use case. Here are a couple of other examples to give you ways to manipulate the data.

This one gets the result object and sorts the dates to make sure they are in date order:

| eval dates=mvsort(json_array_to_mv(json_extract(_raw, "result")))
| eval result_today=replace(mvindex(dates, 0), "[^:]*:\s*(\d+)\}", "\1")
| eval result_tomorrow=replace(mvindex(dates, 1), "[^:]*:\s*(\d+)\}", "\1")

This one extracts the fields and then uses the wildcarding technique with foreach to make the field assignments:

| spath
| foreach result{}.*
    [ eval result_today=if("<<MATCHSTR>>"=strftime(_time, "%F"), '<<FIELD>>', result_today),
           result_tomorrow=if("<<MATCHSTR>>"=strftime(_time, "%F"), result_tomorrow, '<<FIELD>>')]
- I've encountered the same issue before.
- You can resolve it by following these steps:
  - Navigate to "Settings".
  - Click on "Data Inputs". Within "Data Inputs" you'll find two sections: "Local inputs" and "Forwarded inputs".
  - Choose "Forwarded inputs".
  - Select "Windows Event Logs".
  - To add a new configuration, click the "+ Add new" option next to "Windows Event Logs".
  - If you don't see any "Available hosts" at the first "Select Forwarders" stage, try refreshing the page 5-6 times, or go back and try adding new again.
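Under the hood, the UI steps above deploy WinEventLog input stanzas to the selected forwarders; a hand-written inputs.conf equivalent would look roughly like this (channel names are examples, not from the original post):

```
# inputs.conf pushed to the forwarders -- channels are examples
[WinEventLog://Security]
disabled = 0

[WinEventLog://Application]
disabled = 0
```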