All Posts


Well....I suppose "best-practice" would have been a better tag.  Go figure...
One of the key attributes of an index is the retention period, so, assuming you would like to retain different sorts of information for different periods of time, you should consider putting them into different indexes. For example, you might want to keep production data longer than development data. Different types of logs can go in the same index; the key there is to use different sourcetypes so they can be distinguished and treated differently, e.g. for field extractions. So, you are right: your admin should have asked questions like "What do you want to do with the data?" and "How long do you want to keep it?". Having said that, and since you have added the summary indexing tag, you could run reports on the large index to split the useful data off into summary indexes — but that depends on how timely you need the data, e.g. as soon as it hits the index or only after the summary-index report has run.
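To make the retention point concrete, here is a minimal indexes.conf sketch; the index names and retention values are illustrative, not from the original post:

[prod_logs]
homePath   = $SPLUNK_DB/prod_logs/db
coldPath   = $SPLUNK_DB/prod_logs/colddb
thawedPath = $SPLUNK_DB/prod_logs/thaweddb
# keep roughly one year (value is in seconds)
frozenTimePeriodInSecs = 31536000

[dev_logs]
homePath   = $SPLUNK_DB/dev_logs/db
coldPath   = $SPLUNK_DB/dev_logs/colddb
thawedPath = $SPLUNK_DB/dev_logs/thaweddb
# keep roughly 30 days
frozenTimePeriodInSecs = 2592000

Each index then carries its own retention (and, if needed, access controls and sizing), while sourcetypes distinguish the log types within it.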
Imagine, if you will, a table view lookup that has been set up to pull the host name, the environment (Dev/Test/Prod), the server type (Database, Web App, SSO, etc.), and the application the server supports. I have 4 input field LOVs set up:

1. Enclave: lets me choose Dev / Test / Prod; those are the only 3 options, and the token name is "enclave".
2. Type: shows Database, Web App, SSO, Other; again those are the only options; token name is "type".
3. Application: say HR, Payroll, Order Entry, and HealthCare; again, 4 options; token name is "app".
4. This should be a DYNAMIC LOV that shows only the servers in the table view lookup that meet the conditions set by the first 3 LOVs. Example: Enclave set to Dev, Type set to Web App, Application set to HR. My table view clearly shows there are 2 web app server names, so the 4th LOV should show Server003, Server007, All. The token would then be set based on the choice (003 or 007), and if "All" were picked the token would be Server003, Server007. This would drive the panel searches.

Is this possible? I can get the 4th LOV to run, but it doesn't give me a list.
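This is possible with a dropdown whose populating search references the other tokens. A minimal Simple XML sketch, assuming a lookup file named server_inventory.csv with fields host, enclave, type, and app (those names are illustrative, not from the original post):

<input type="dropdown" token="server" searchWhenChanged="true">
  <label>Server</label>
  <choice value="*">All</choice>
  <fieldForLabel>host</fieldForLabel>
  <fieldForValue>host</fieldForValue>
  <search>
    <query>| inputlookup server_inventory.csv
| search enclave="$enclave$" type="$type$" app="$app$"
| stats count by host</query>
  </search>
  <default>*</default>
</input>

Because the populating search references $enclave$, $type$, and $app$, the list refreshes whenever those tokens change. One design choice worth noting: rather than expanding "All" into an explicit list of server names, the sketch gives the All choice the value "*", so the panel searches can simply filter on host=$server$ in either case.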
I wanted to index the span tag "error" so that I can filter spans by this tag and create alerts based on it. I tried to add a custom MetricSet. Unfortunately, after I start the analysis, I don't see the check mark action to activate my new MetricSet. I have followed the instructions on this page: https://docs.splunk.com/observability/en/apm/span-tags/index-span-tags.html#index-a-new-span-tag-or-process
Hello everyone, I'm new and trying to learn. I've searched for hours trying to get a dashboard to display the computers within my domain and whether they are online or not, with an associated time. The time associated with being up or down isn't important, just a nicety.
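One common starting point is the metadata command, which reports the most recent time each host sent events. A minimal SPL sketch; the 5-minute threshold and the index filter are assumptions to adjust for your environment:

| metadata type=hosts index=*
| eval status=if(now() - recentTime < 300, "Online", "Offline")
| convert ctime(recentTime) AS last_seen
| table host status last_seen
| sort status, host

One caveat: this only lists hosts that have sent data at some point, so a machine that has never reported to Splunk will not appear; joining in a lookup of expected hosts can cover that case.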
I have about 100 servers. These are a mix of different Oracle servers: databases, web app servers, data warehouse servers, SSO servers, and OBIEE servers. On top of that there are the standard Dev/Test/Prod environments, and all of this supports 5 different development / sustainment projects.

A request was made to our Splunk admin in the form of the server names and all of the log files our engineer could think of at the time. It appears the Splunk admin just crammed everything into a single index. Literally hundreds of log files, as each server appeared to have 10-15 log files identified. Given that the servers do different things, the request didn't necessarily have the same log files identified for every server. I would have "expected" the request to be vetted with "What do you really need?" rather than "HERE YOU GO!" Maybe I've done software development too long; it could be me.

Anyway, was this the right way to go? Would it have made more sense to have 1 index for the database servers, 1 index for the web app servers, 1 index for the data warehouse, etc.? Or perhaps 1 index for the production assets, 1 for test, and 1 for dev? There doesn't appear to be a "best practice" that I can find... and what I have is ONE FREAKING HUGE index.

If you read this far, thanks. If you have a cogent answer that makes sense to me, even better!
You should set the LINE_BREAKER attribute in props.conf on your indexer machine(s). Also set SHOULD_LINEMERGE = false to prevent Splunk from recombining the events.

[yoursourcetype]
# break before each header; the newlines in the capture group are discarded,
# so the line of hashes stays at the start of the next event
LINE_BREAKER = ([\r\n]+)\#{72}\n[^\#]*\#{72}
SHOULD_LINEMERGE = false

Since your log header includes two lines of hashes, the regex matches both of them, so Splunk only breaks at a complete header rather than at every hash line.
I have a sample log. How do I set up line breaking in props.conf on the indexers so that Splunk recognizes the header (###) as the first line of the event message?

Sample log:

########################################################################
Thu 05/02/2024 - 8:06:13.34
########################################################################
Parm-1 is XYZ
Parm-2 is w4567
Parm-3 is 3421
Parm-4 is mclfmkf
Properties file is jakjfdakohj
Parm-6 is %Source_File%
Parm-7 is binary
Parm-8 is
Parm-9 is
SOURCE_DIR is mfkljfdalkj
SOURCE_FILE is klnsaclkncalkn
FINAL_DIR is /mail/lslk/jdslkjd/
FINAL_FILE is lkjdflkj_*.txt
MFRAME is N
Version (C) Copyright *************************************************
Successfully connected

I want Splunk to include the ### as the first line of the event message, but the line break I get starts at the second line, Thu 05/02/2024 - 8:06:13.34. Please let me know.
That does seem like it would get the results I want, though it leaves one of my issues unsolved. The parent query "index=ind1 earliest=-1d field1=abc" returns many, many results without some filter on field2. My initial approach (plus your fix for it) filters those results after that broad search is done, which isn't great from a performance perspective. Perhaps I'm better off just using a join at that point; not sure. Anyway, thanks for the reply.
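If the field2 values of interest can be computed first, one option is to push them into the base search with a subsearch, so the broad scan is narrowed up front rather than filtered afterwards. A rough sketch, keeping the index and field names from the post but with a purely illustrative inner search (mind the default subsearch limits of 10,000 results / 60 seconds):

index=ind1 earliest=-1d field1=abc
    [ search index=ind1 earliest=-1d <search that finds the wanted field2 values>
      | dedup field2
      | fields field2 ]

The subsearch result is expanded into (field2="v1" OR field2="v2" ...) and applied as part of the initial search, which is often cheaper than a join.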
I need a query to remove duplicates from a stats count.

Sample input:

event  email
abc    xyz@email.com
abc    xyz@email.com
abc.   test@email.com
abc.   test@email.com
xyz    xyz@email.com

Expected output:

event  count
abc    2
xyz    1

What I am getting:

event  count
abc    4
xyz    1
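A minimal sketch of one way to get there, assuming the trailing dot on "abc." is noise to strip and that "count" means the number of distinct emails per event:

... | eval event=rtrim(event, ".")
    | stats dc(email) AS count BY event

dc() counts distinct values, so the repeated xyz@email.com and test@email.com rows collapse to 2 for abc and 1 for xyz.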
Hi everyone, I updated the version of my database agent, and by default AppDynamics sets the name to "Default Database Agent", but I need to customize the name for each one and I could not find out where to set this configuration. Can anyone help me figure out where to change the database agent name? Thanks.
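In case it helps while you look: on the standalone Database Agent, the name shown in the Controller can usually be set with a JVM system property when the agent is started. A sketch, with an illustrative name; verify the property against the docs for your agent version:

java -Ddbagent.name="Prod-DB-Agent-01" -jar db-agent.jar

Giving each agent a unique value for that property is what distinguishes them in the Controller.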
Thanks, sir. I was thinking of something complex, but you made it very simple.
Hi @Satish.Kumar Yadav, Have you been able to check out the past two replies? If one of them has answered your question, click the "Accept as Solution" button on that reply. If you still need help or have follow-up questions, reply to keep the conversation going.
I get the error shown in the title when trying to upload a CSV as a lookup. I tried the solution mentioned here: https://community.splunk.com/t5/Splunk-Search/What-does-the-error-quot-File-has-no-line-endings-quot-for-a/m-p/322387 but that doesn't work. Any suggestions?
Hi @Srujana.Mora, Looking into this for you! I was having trouble downloading the file too. 
I get weekly email updates with results from weekly URA scans. After noticing that we had outdated apps, we rolled out updates for three public apps: Sankey Diagram, Scalable Vector Graphics, and Splunk Dashboard Examples. In our testing environment URA is now content, and all apps pass the jQuery scans without issues. However, in our production environment the URA scan still fails all three apps. It does not specify which files are affected or whether there is a problem on one or all instances, so I don't know what is causing the results.

I have double- and triple-checked the apps, comparing hash values for every file both on the deployment server and on all individual test and production search heads. Everything except for the "install hash" in "meta.local" is identical in the test and production environments. The apps are identical between cluster members in the test and production environments respectively, and there are no additional files present on any search head in the production environment.

Why is URA still failing these apps only in the production environment? How can I identify the reason for the scan failures, as they should all pass in both environments, being identical and all? Any and all suggestions are most welcome. All the best.
Hi, Thanks for asking your question on the community. Please check out this Community Knowledge Base Article and let me know if it helps you out. https://community.appdynamics.com/t5/Knowledge-Base/How-does-AppDynamics-license-consumption-work/ta-p/34449
https://docs.splunk.com/Documentation/Splunk/9.3.1/Admin/MigrateKVstore#Migrate_the_KV_store_in_a_single-instance_deployment

It seems your first install of Splunk used the older MongoDB storage engine (mmapv1) for the KV store. Early 9.x releases were supposed to include an out-of-band upgrade from mmapv1 to wiredTiger. The link above describes how that storage-engine upgrade was meant to be handled and may still work for you. I would recommend reaching out to Splunk Support, if you have a support contract with them, while you work through this.
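For reference, a sketch of the single-instance flow from that page; verify each command against the docs for your exact version before running anything:

# confirm the current storage engine and KV store health
splunk show kvstore-status

# take a KV store backup first
splunk backup kvstore -archiveName kvstore_pre_migration

# migrate the storage engine to wiredTiger
splunk migrate kvstore-storage-engine --target-engine wiredTiger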
Replied in the wrong thread, ignore!
Hi @_olivier_, it seems to be a comma-separated file; in this case, you must put props.conf on the UF as well. Ciao. Giuseppe