All Posts


This solution is working and I'm not seeing any warning message now. How is this different from mvzip? May I know why mvzip gives a warning if the data is empty?
Hi everyone, I’m trying to visualize the network traffic of an interface in Splunk using an area chart. However, the Y-axis scale is currently displaying as K, M, B (for thousand, million, billion), but I would like it to show K, M, G, T (for kilobytes, megabytes, gigabytes, terabytes). Is there a way to adjust this? Thanks!
It is good that you tried to illustrate input and desired output. But you forgot to tell us what you are trying to count that should be either 4 or 2. In other words, you need to explain the logic between input and desired output fully and explicitly.

If I take a wild guess at mind reading, you want to count the unique number of e-mail addresses related to each type of event. You want to use distinct count (dc), not count.

| stats dc(email) as count by event

Here's an emulation of your mock input:

| makeresults format=csv data="_raw
abc xyz@email.com
abc xyz@email.com
abc. test@email.com
abc. test@email.com
xyz xyz@email.com"
| rex "(?<event>\w+)\W+(?<email>\S+)"
``` data emulation above ```

The output is

event count
abc 2
xyz 1
Thank you for this, it solves the dynamic size issue. Another question arises from this sample: on OS Description, the value just runs through the panel. Is it possible, or is there an option, to wrap it?
When I display the Choropleth Map on a dashboard, the painted areas are collapsed. When I display the visualization in the Search app, there is no problem. I was wondering if anyone has experienced the same issue or has any ideas on how to solve it. On Splunk Enterprise 9.3.1 the map renders fine in Dashboard Classic, but the issue occurs in Dashboard Studio.

Choropleth Map in Search app
Choropleth Map in Dashboard Studio
Choropleth Map in Dashboard Classic

Thanks,
Hi, "error" is actually a case where you don't need to index a tag to be able to filter on it. Here is a screenshot of filtering spans where error=true. And here is an example of filtering traces that contain errors: PS - The reason it won't allow you to index "error" as an APM MetricSet is that "error" isn't actually a span tag, so there is nothing to index.
Thank you for letting me know! Unfortunately the workaround didn't fix it, hopefully the next update will. 
You should ensure the tokens always have a value by setting them in an init block - just using an initial / default value is not enough to set the input token to a value.
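A minimal Simple XML sketch of that pattern (the token name env and its default value prod are only illustrative, not taken from the original dashboard):

<form>
  <label>Example dashboard</label>
  <init>
    <!-- illustrative: guarantee the token has a value before any search runs -->
    <set token="env">prod</set>
    <set token="form.env">prod</set>
  </init>
  <fieldset submitButton="false">
    <input type="dropdown" token="env">
      <label>Environment</label>
      <choice value="prod">Production</choice>
      <choice value="dev">Development</choice>
      <default>prod</default>
    </input>
  </fieldset>
</form>

Setting form.env alongside env keeps the rendered input in sync with the token, so downstream searches never see an unset $env$.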
See TERM() example https://docs.splunk.com/Documentation/Splunk/latest/Search/UseCASEandTERMtomatchphrases
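As a minimal sketch (the index, sourcetype, and IP value below are placeholders), TERM() makes Splunk match the whole indexed term rather than its minor segments:

index=web sourcetype=access_combined TERM(203.0.113.42)
``` matches events where 203.0.113.42 was indexed as one term bounded by major breakers (e.g. spaces), instead of matching any event containing the pieces 203, 0, 113, and 42 ```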
Followup to previous: the SPL below shows status 'dots' in a chart. I am prepared to use it if I can't find a pie slice coloring that will work for me.

| makeresults
| eval dbs = "[{\"entity\":\"I0\",\"instanceCount\":\"0\",\"instanceMax\":\"3\"},{\"entity\":\"I1\",\"instanceCount\":\"1\",\"instanceMax\":\"3\"},{\"entity\":\"I2\",\"instanceCount\":\"2\",\"instanceMax\":\"3\"},{\"entity\":\"I3\",\"instanceCount\":\"3\",\"instanceMax\":\"3\"}]"
| spath input=dbs path={} output=dblist
| mvexpand dblist
| spath input=dblist
| eval pct_avail=round(100*instanceCount/instanceMax,1)
| eval status=case(pct_avail=100, "🟢", pct_avail>=50, "🟡️", pct_avail>1, "🟠", true(), " ")
| table _time entity instanceCount instanceMax pct_avail status
I am looking for a visualization mechanism to colorize slices of a pie by their status: OK (green), Warning (yellow), Major (orange), Critical (red). All of the pie chart viz examples I have seen are ranked by count of some category, and I want to rank by status. In the example below, I have 4 groups of services, each with a number of service instances providing service up to a maximum number defined for the group. I would like to visually see a group N of M colored by status and not ranked by count.

Any ideas on where to go? The pie chart viz is ruled out per the above (I think). I looked at other visualizations such as the sunburst, but it didn't present the data the way I wanted.

Example SPL:

| makeresults
| eval dbs = "[{\"entity\":\"I0\",\"instanceCount\":\"0\",\"instanceMax\":\"3\"},{\"entity\":\"I1\",\"instanceCount\":\"1\",\"instanceMax\":\"3\"},{\"entity\":\"I2\",\"instanceCount\":\"2\",\"instanceMax\":\"3\"},{\"entity\":\"I3\",\"instanceCount\":\"3\",\"instanceMax\":\"3\"}]"
| spath input=dbs path={} output=dblist
| mvexpand dblist
| spath input=dblist
| eval pct_avail=round(100*instanceCount/instanceMax,1)
| eval status=case(pct_avail=100, "OK", pct_avail>=50, "Warning", pct_avail>1, "Major", true(), "Critical")
| eval color=case(status="Critical", "#FF0000", status="Major", "#D94E17", status="Warning", "#CBA700", status="OK", "#118832", true(), "#1182F3")
| stats count by entity
While the admin might have asked if that's really what you want, it's more of an architect's job to design your indexes properly.

As for splitting the data, there are usually two main reasons for splitting data into separate indexes: retention parameters and access restrictions. Just because the servers are DB servers, or just because they are dev servers, doesn't mean that you need separate indexes. You might, though, if separate teams need access to logs from dev/testing/prod environments, or if you need to keep data from dev for a month but from prod for two years. A good architect will also try to find out if there is a chance of a need for such differentiation in the foreseeable future.

Another thing that could warrant separate indexes is a huge difference in the volume of data between sources. But all those things need to be considered on a per-case basis. There is no one-size-fits-all solution saying how you should split your data between indexes.
From a cursory search, this seems to be an error associated with Palo Alto firewalls. If this is an error message generated by a Palo Alto firewall, then you will likely find more relevant information on the Palo Alto docs or support forum.
You can export the results of the scan in JSON format, then look inside for the individual checks and their results. Find entries with "Result":"BLOCKER"; the messages should indicate why the app is failing the check and should include the problematic file path. I use Notepad++ with the JSTools extension to run JSFormat and make the JSON file readable.
Are you able to open the file in a text editor, then copy the content to a new file, then save it as .csv? Perhaps there is a hidden part of the file which is causing issues. You could also try opening the lookup with the lookup editor app and then pasting the contents into the lookup editor interface, assuming the file is not too big.

Some other troubleshooting steps:
1. Can you upload other CSV files?
2. Can you truncate the problem file to a smaller CSV file and try uploading it?
3. Can you try saving the .csv file using a different text processor?
Could you post the sanitized source code of your dashboard or inputs? It sounds like you are doing the right thing in using a dynamically populated 4th LOV, but maybe something isn't set right if the search is running but not populating the final LOV.
That is how I understood it should work. However, when I create the list input, go to the Dynamic Options, and create the query, it returns nothing; it just says "Populating", so I know the query ran. If I click on the link to run the query in a search window, I can see results.
Dropdown inputs can be populated by a static list, a dynamic search, or a combination of both. For a dynamic list, the search should return unique values for the list. So, in your instance, this could be filtered from a lookup table which holds all your servers, or by an index search to pull all the servers from your logs (although you would probably still need a lookup to get the other fields).
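As a rough sketch of such a dynamic-list search (the lookup name servers.csv and the field server are placeholders for whatever your environment actually uses):

| inputlookup servers.csv
| stats count by server
| sort server
| fields server
``` each distinct server value becomes one entry in the dropdown ```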
What sort of logs do you have from your computers? Ideally you can identify a log that is only produced when the computer is online; then you could search for that log over a chosen time range, and it would show which computers were online during that time.
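For example, something along these lines (the index and sourcetype are placeholders) lists each host that produced such a log in the selected time range, together with when it was last seen:

index=os_logs sourcetype=heartbeat
| stats latest(_time) as last_seen by host
| eval last_seen=strftime(last_seen, "%Y-%m-%d %H:%M:%S")
``` hosts appearing here were online at some point in the search window ```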
Splunk is good at reporting on what is in the logs; it is not so good at reporting on what is not there. If a server is offline, there may not be any data in Splunk for that server, so you have to tell Splunk which servers to expect to find data for. This is often done by using a lookup table with the names of the servers and checking the logs against those names to find out when the last piece of information was indexed.
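A common sketch of that lookup approach (assuming a hypothetical lookup expected_hosts.csv with a host column; adjust the index filter and the one-hour threshold to suit):

| tstats latest(_time) as last_seen where index=* by host
| inputlookup append=true expected_hosts.csv
| stats max(last_seen) as last_seen by host
| where isnull(last_seen) OR last_seen < relative_time(now(), "-1h")
| eval last_seen=if(isnull(last_seen), "never", strftime(last_seen, "%Y-%m-%d %H:%M:%S"))
``` hosts that exist only in the lookup, or that have been silent for over an hour, are the ones to investigate ```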