See TERM() example https://docs.splunk.com/Documentation/Splunk/latest/Search/UseCASEandTERMtomatchphrases
Followup to previous, the SPL below shows status 'dots' in a chart.  I am prepared to use it if I can't find a pie slice coloring that will work for me.

| makeresults
| eval dbs = "[{\"entity\":\"I0\",\"instanceCount\":\"0\",\"instanceMax\":\"3\"},{\"entity\":\"I1\",\"instanceCount\":\"1\",\"instanceMax\":\"3\"},{\"entity\":\"I2\",\"instanceCount\":\"2\",\"instanceMax\":\"3\"},{\"entity\":\"I3\",\"instanceCount\":\"3\",\"instanceMax\":\"3\"}]"
| spath input=dbs path={} output=dblist
| mvexpand dblist
| spath input=dblist
| eval pct_avail=round(100*instanceCount/instanceMax,1)
| eval status=case(pct_avail=100, "🟢", pct_avail>=50, "🟡️", pct_avail>1, "🟠", true(), " ")
| table _time entity instanceCount instanceMax pct_avail status
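For comparison, here is a minimal Python sketch of the same case() thresholds (the circle emoji are the same code points used in the SPL above; this is just to illustrate the logic, not part of the dashboard):

```python
def status_dot(pct_avail: float) -> str:
    """Mirror of the SPL case(): green at 100%, yellow at >=50%, orange above 1%."""
    if pct_avail == 100:
        return "\U0001F7E2"  # green circle
    if pct_avail >= 50:
        return "\U0001F7E1"  # yellow circle
    if pct_avail > 1:
        return "\U0001F7E0"  # orange circle
    return " "

# Same four instances as the makeresults data above: 0/3, 1/3, 2/3, 3/3
for count, maximum in [(0, 3), (1, 3), (2, 3), (3, 3)]:
    pct = round(100 * count / maximum, 1)
    print(count, maximum, pct, status_dot(pct))
```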
I am looking for a visualization mechanism to colorize slices of a pie by their status: OK (green), Warning (yellow), Major (orange), Critical (red). All of the pie chart viz examples I have seen are ranked by count of some category, and I want to rank by status.  In the example below, I have 4 groups of services, each with a number of service instances providing service up to a maximum number defined for the group.  I would like to visually see a group NofM colored by status and not ranked by count. Any ideas on where to go?  The pie chart viz is ruled out per the above (I think).  I looked for other visualizations, such as the starburst, but it didn't present the way I wanted.

Example SPL:

| makeresults
| eval dbs = "[{\"entity\":\"I0\",\"instanceCount\":\"0\",\"instanceMax\":\"3\"},{\"entity\":\"I1\",\"instanceCount\":\"1\",\"instanceMax\":\"3\"},{\"entity\":\"I2\",\"instanceCount\":\"2\",\"instanceMax\":\"3\"},{\"entity\":\"I3\",\"instanceCount\":\"3\",\"instanceMax\":\"3\"}]"
| spath input=dbs path={} output=dblist
| mvexpand dblist
| spath input=dblist
| eval pct_avail=round(100*instanceCount/instanceMax,1)
| eval status=case(pct_avail=100, "OK", pct_avail>=50, "Warning", pct_avail>1, "Major", true(), "Critical")
| eval color=case(status="Critical", "#FF0000", status="Major", "#D94E17", status="Warning", "#CBA700", status="OK", "#118832", true(), "#1182F3")
| stats count by entity
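The status and color logic in that SPL is a threshold check plus a lookup; a minimal Python sketch of the same mapping (same thresholds and hex codes as the eval/case calls above), for working out the logic outside Splunk:

```python
# Hex codes copied from the SPL's color case()
STATUS_COLORS = {
    "Critical": "#FF0000",
    "Major": "#D94E17",
    "Warning": "#CBA700",
    "OK": "#118832",
}

def status_of(pct_avail: float) -> str:
    """Same thresholds as the SPL status case()."""
    if pct_avail == 100:
        return "OK"
    if pct_avail >= 50:
        return "Warning"
    if pct_avail > 1:
        return "Major"
    return "Critical"

def color_of(pct_avail: float, default: str = "#1182F3") -> str:
    return STATUS_COLORS.get(status_of(pct_avail), default)

print(color_of(100), color_of(66.7), color_of(33.3), color_of(0))
```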
While the admin might have asked if that's really what you want, it's more of an architect's job to design your indexes properly. As for splitting the data: there are usually two main reasons for splitting data into separate indexes, retention parameters and access restrictions. Just because the servers are DB servers, or just because they are dev servers, doesn't mean that you need separate indexes. You might, though, if separate teams need access to logs from dev/testing/prod environments, or if you need to keep data from dev for a month but from prod for two years. A good architect will also try to find out if there is a chance of a need for such differentiation in the foreseeable future. Another thing that could warrant separate indexes is a huge difference in the volume of data between sources. But all those things need to be considered on a per-case basis. There is no one-size-fits-all solution saying how you should split your data between indexes.
From a cursory search, this seems to be an error associated with Palo Alto firewalls. If this is an error message generated by a Palo Alto firewall, then you will likely find more relevant information in the Palo Alto docs or support forum.
You can export the results of the scan in JSON format, then look inside for the individual checks and their results. Find entries with "Result":"BLOCKER", as the messages should indicate why the app is failing the check, and should include the problematic file path. I use Notepad++ with the JStools extension to JSFormat and make the json file readable.  
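As a sketch of the filtering step: only the "Result":"BLOCKER" key/value comes from the post above; the "checks" and "Messages" key names here are assumptions, so match them to whatever your actual export uses.

```python
import json

def blocker_messages(report: dict) -> list:
    """Walk a parsed scan export and collect messages from BLOCKER entries.
    NOTE: the "checks" and "Messages" key names are assumptions."""
    return [c.get("Messages", []) for c in report.get("checks", [])
            if c.get("Result") == "BLOCKER"]

# Tiny illustrative export, not a real scan result
sample = json.loads(
    '{"checks": ['
    '{"Name": "check_a", "Result": "SUCCESS"},'
    '{"Name": "check_b", "Result": "BLOCKER", "Messages": ["bad file: bin/x.sh"]}'
    ']}'
)
print(blocker_messages(sample))  # -> [["bad file: bin/x.sh"]]
```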
Are you able to open the file in a text editor, then copy the content to a new file, then save it as .csv? Perhaps there is a hidden part of the file which is causing issues. You could also try opening the lookup with the lookup editor app and then pasting the contents into the lookup editor interface, assuming the file is not too big.

Some other troubleshooting steps:
1. Can you upload other CSV files?
2. Can you truncate the problem file to a smaller CSV file and try uploading it?
3. Can you try saving the .csv file using a different text editor?
Could you post the sanitized source code of your dashboard or inputs? It sounds like you are doing the right thing in using a dynamically populated 4th LOV, but maybe something isn't set right if the search is running but not populating the final LOV.
That is how I understood it should work.  However, when I create the list input and go to the Dynamic Option and create the query...it returns nothing...it just says "Populating" so I know the query ran.  If I click on the link to run the query in a search window, I can see results.
Dropdown inputs can be set by a static list or a dynamic list or a combination of both. For a dynamic list, the search should return unique values for the list. So, in your instance, this could be filtering from a lookup table which holds all your servers, or by an index search to pull all the servers from your logs (although you would probably still need a lookup to get the other fields).
What sort of logs do you have from your computers? Ideally you can identify a log that is only produced when the computer is online, then you could search for that log using a time selector, and it would show which computers are online in that time.
Splunk is good for reporting on what is in the logs; it is not so good at reporting on what is not there. If a server is offline, there may not be any data in Splunk for that server, so you have to tell Splunk which servers to expect data from. This is often done by using a lookup table with the names of the servers and checking the logs against those names to find out when the last piece of information was indexed.
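The lookup-based approach boils down to a set difference between the hosts you expect and the hosts actually seen in the search window; a minimal Python sketch with hypothetical server names:

```python
# Servers we expect to report in (the role the lookup table plays in Splunk)
expected = {"web01", "web02", "db01", "db02"}

# Hosts actually observed in the logs during the search window
seen = {"web01", "db01", "db02"}

# Anything expected but not seen is presumed offline / not reporting
offline = sorted(expected - seen)
print(offline)  # -> ['web02']
```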
Well....I suppose "best-practice" would have been a better tag.  Go figure...
One of the key attributes of an index is the retention period, so, assuming you would like to retain different sorts of information for different periods of time, you should consider putting them into different indexes. For example, you might want to keep production information for longer than development. Different types of logs can go in the same index, but the key here is to use different sourcetypes so they can be distinguished and treated differently, e.g. for field extractions.

So, you are right, your admin should have asked questions like: what do you want to do with the data, how long do you want to keep it, etc.

Having said that, and since you have added the summary indexing tag, you could run reports on the large index to split the useful data off into summary indexes. But then it depends on how timely you need the data, e.g. as soon as it hits the index or only after the summary index report has been executed.
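For illustration, per-index retention is controlled in indexes.conf via frozenTimePeriodInSecs. A sketch only: the index names and paths below are placeholders, and the periods (roughly two years vs. 30 days) are examples.

[prod_logs]
homePath   = $SPLUNK_DB/prod_logs/db
coldPath   = $SPLUNK_DB/prod_logs/colddb
thawedPath = $SPLUNK_DB/prod_logs/thaweddb
# keep roughly two years
frozenTimePeriodInSecs = 63072000

[dev_logs]
homePath   = $SPLUNK_DB/dev_logs/db
coldPath   = $SPLUNK_DB/dev_logs/colddb
thawedPath = $SPLUNK_DB/dev_logs/thaweddb
# keep roughly 30 days
frozenTimePeriodInSecs = 2592000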
Imagine, if you will, a table view lookup that has been set up to pull the host name, the environment (Dev/Test/Prod), the server type (Database, Web App, SSO, etc.), and the application the server supports. I have 4 input field LOVs set up:

1. Enclave... lets me choose Dev / Test / Prod; those are the only 3 options and the token name is "enclave"
2. Type... shows Database, Web App, SSO, Other... again those are the only options; token name is "type"
3. Application... say HR, Payroll, Order Entry and HealthCare... again, 4 options; token name is "app"
4. This should be a DYNAMIC LOV that shows only the servers in the table view lookup that meet the condition set by the first 3 LOVs.

Example: Enclave set to Dev, Type set to Web App, Application set to HR. My table view clearly shows there are 2 web app server names, so the 4th LOV should show Server003, Server007, All. The token would then be set based on the choice (003 or 007), and if "All" were picked the token would be Server003, Server007. This would drive the panel searches.

Is this possible? I can get the 4th LOV to run but it doesn't give me a list.
I wanted to index the span tag "error" so that I can filter spans by this tag and create alerts based on it. I tried to add a custom MetricSet. Unfortunately, after I start the analysis, I don't see the check mark action to activate my new MetricSet. I have followed the instructions on this page: https://docs.splunk.com/observability/en/apm/span-tags/index-span-tags.html#index-a-new-span-tag-or-process
Hello everyone, New and trying to learn. I've searched for hours trying to get a dashboard to display computers within my domain and whether they are online or not, with a time associated. The time associated with being up or down isn't important, just a nicety.
I have about 100 servers.  These are a mix of different Oracle servers, databases, web app servers, data warehouse servers, SSO servers and OBIEE servers.  There are also the standard Dev/Test/Prod environments, and this is all supporting 5 different development / sustainment projects.

A request was made to our Splunk admin in the form of the server name and all of the log files our engineer could think of at the time.  It appears the Splunk admin just crammed everything into a single index.  Literally hundreds of log files, as each server appeared to have 10-15 log files identified.  Given the servers do different things, the request didn't necessarily have the same log files identified for every server.  I would have "expected" the request would have been vetted to answer "What do you really need?" rather than "HERE YOU GO!"  Maybe I've done software development too long; it could be me.

Anyway, was this the right way to go?  Would it have made more sense to have 1 index for the database servers, 1 index for the web app servers, 1 index for the data warehouse, etc.?  Or perhaps 1 index for the production assets, 1 for test, and 1 for dev?  There doesn't appear to be a "best practice" that I can find... and what I have is ONE FREAKING HUGE index.

If you read this far, thanks.  If you have a cogent answer that makes sense to me, even better!
You should set the LINE_BREAKER attribute in props.conf on your indexer(s), or wherever parsing happens (e.g. a heavy forwarder). Also set SHOULD_LINEMERGE = false to prevent Splunk from recombining the events.

[yoursourcetype]
LINE_BREAKER = ([\r\n]+)(?=#+\r?\n\w{3} \d{2}/\d{2}/\d{4})
SHOULD_LINEMERGE = false

LINE_BREAKER must contain a capturing group; the text matched by the group is discarded and the next event starts right after it, here at the row of hashes. Since your log header includes two lines of hashes, the lookahead makes the break happen only before a hash row that is immediately followed by a timestamp line, so the second hash row stays inside the event instead of starting a new one.
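Before deploying, you can sanity-check a candidate line-breaking regex outside Splunk with Python's re module. This is a rough approximation (Splunk applies the pattern to a streaming buffer), and the 72-hash header width is taken from the sample log; the pattern below breaks only before a hash row that is followed by a timestamp line:

```python
import re

# Candidate pattern: break on newlines that precede a '#' row followed by
# a "Thu 05/02/2024"-style timestamp line.
PATTERN = r"([\r\n]+)(?=#+\r?\n\w{3} \d{2}/\d{2}/\d{4})"

# Two abbreviated events shaped like the sample log
raw = (
    "#" * 72 + "\n"
    + "Thu 05/02/2024 - 8:06:13.34\n"
    + "#" * 72 + "\n"
    + "Parm-1 is XYZ\n"
    + "Successfully connected\n"
    + "#" * 72 + "\n"
    + "Thu 05/02/2024 - 9:00:00.00\n"
    + "#" * 72 + "\n"
    + "Parm-1 is ABC\n"
)

# re.split keeps the captured newline separators; filter them out.
events = [p for p in re.split(PATTERN, raw) if p.strip("\r\n")]
print(len(events))                # 2
print(events[1].splitlines()[1])  # Thu 05/02/2024 - 9:00:00.00
```

Each resulting event starts with the hash row, which is the behavior the question asks for.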
I have a sample log. How do I create line breaking in props.conf on the indexers so that Splunk can recognize the header (###) as the first line of the event message?

Sample log:

########################################################################
Thu 05/02/2024 - 8:06:13.34
########################################################################
Parm-1 is XYZ
Parm-2 is w4567
Parm-3 is 3421
Parm-4 is mclfmkf
Properties file is jakjfdakohj
Parm-6 is %Source_File%
Parm-7 is binary
Parm-8 is
Parm-9 is
SOURCE_DIR is mfkljfdalkj
SOURCE_FILE is klnsaclkncalkn
FINAL_DIR is /mail/lslk/jdslkjd/
FINAL_FILE is lkjdflkj_*.txt
MFRAME is N
Version (C) Copyright *************************************************
Successfully connected

I want Splunk to include the ### as the first line of the event message, but I am only able to get the line breaker to work from the second line, Thu 05/02/2024 - 8:06:13.34.

Please let me know