All Posts

Hi, I have an issue with the fishbucket of the Universal Forwarder. I have searched for documentation, but there seems to be very little, and few topics cover it. The problem I am facing is that the fishbucket is taking up a large amount of space, about 2 GB on the hard drive, while the limit configured in limits.conf is:  file_tracking_db_threshold_mb = 500  In some other topics I read that the fishbucket can grow to 2 or 3 times the configured limit, because of its backup mechanism with the file save and snapshot.tmp. However, is there a hard limit on the size of the fishbucket? Will it continue to expand over time without limit, or only up to a certain point? PS: I have the nmon TA installed on my server. Please point me to the Splunk documentation on this part. Thank you.
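For reference, the setting mentioned in the question above normally lives in the [inputproc] stanza of limits.conf on the forwarder. A minimal sketch of that fragment (the value shown is just the one from the post):

```ini
# limits.conf on the Universal Forwarder (sketch)
[inputproc]
# Target size for the file-tracking database (fishbucket).
# Transient copies made during rollover (e.g. snapshot.tmp)
# can temporarily push disk usage past this value.
file_tracking_db_threshold_mb = 500
```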
Thanks. I am on Splunk Enterprise on a local server. Do you know where the file you reference is and its name?
Hi Team, I have two events (attaching screenshots for reference). 1. How do I retrieve the uniqObjectIds and display them in table form? 2. How do I retrieve the objectIds and version and display their values in different table columns? First event: msg: unique objectIds, name: platform-logger, pid: 8, uniqObjectIds: [ 275649 108976 ], uniqObjectIdsCount: 1. Second event: event: { body: { "objectType": "material", "objectIds": [ "275649" ], "version": "latest" }, msg: request body }. The closest query I have come to is below, but it still does not give what I want. Expected: a table where each object is in a different row, e.g.
uniqueIds
275649
108976
index="" source IN ("")
| eval PST=_time-28800
| eval PST_TIME=strftime(PST, "%Y-%d-%m %H:%M:%S")
| eval split_field=split(_raw, "Z\"}")
| mvexpand split_field
| rex field=split_field "objectIdsCount=(?<objectIdsCount>[^,]+)"
| rex field=split_field "uniqObjectIdsCount=(?<uniqObjectIdsCount>[^,]+)"
| rex field=split_field "recordsCount=(?<recordsCount>[^,]+)"
| rex field=split_field "sqsSentCount=(?<sqsSentCount>[^,]+)"
| where objectType="material"
| table _time, PST_TIME, objectType, objectIdsCount, uniqObjectIdsCount, recordsCount, sqsSentCount
| sort _time desc
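A hedged sketch of one way to get each id on its own row, assuming the raw event text literally contains a "uniqObjectIds: [ ... ]" block (the index/source filters are the placeholders from the post, and the field names ids_raw/uniqueIds are invented for illustration):

```spl
index="" source IN ("")
| rex field=_raw "uniqObjectIds:\s*\[(?<ids_raw>[^\]]+)\]"
| rex field=ids_raw max_match=0 "(?<uniqueIds>\d+)"
| mvexpand uniqueIds
| table uniqueIds
```

For the second (JSON) event, spath may be the simpler route, e.g. `| spath path=body.objectIds{} output=objectIds | spath path=body.version output=version`, again assuming the body is parsed as JSON in your events.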
Hi @herguzav , you should look at your question in a different way: what are your requirements? What is the desired result? Starting from this point of view, you can analyze your logs, identifying the conditions to verify and whether you already have the eventtypes and fields in the Data Model. At the very least you can see whether you really need to add a field or a constraint to the Data Model. Just as an example (because it already exists): if you need to check failed logins on Linux, you can analyze the Linux message ("Failed password") and create (if it does not exist) the related eventtype; then you can check whether the Data Model has the requested fields (e.g. user, source_ip, etc.); if not, you can add them. Ciao. Giuseppe
Hi @olawalePS , the issue is probably related to the time format: you have different formats in your data (1, 2 or 3 digits of milliseconds), so your eval command probably extracts the data correctly only when it matches the right format. You should try to normalize your data, something like this: | eval timestamp1=strptime(lastContactTime,"%Y-%m-%dT%H:%M:%S.%NZ"), timestamp2=strptime(lastContactTime,"%Y-%m-%dT%H:%M:%S.%2NZ"), timestamp3=strptime(lastContactTime,"%Y-%m-%dT%H:%M:%S.%3NZ") | eval timestamp=coalesce(timestamp1,timestamp2,timestamp3) Ciao. Giuseppe
Hi, What are the steps for setting up an email alert when the SQL Server and SQL Agent services are down?
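One hedged sketch of a search that could back such an alert, assuming the Windows service state is being collected with the Splunk Add-on for Windows (WinHostMon Service events; the field names DisplayName and State may differ in your environment, and the service display names below are only examples):

```spl
index=windows sourcetype=WinHostMon source=Service
    (DisplayName="SQL Server*" OR DisplayName="SQL Server Agent*")
| stats latest(State) as State by host, DisplayName
| where State!="Running"
```

This could then be saved as an alert that triggers when the result count is greater than 0, with an email alert action attached.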
Hi @gcusello , no worries about that. Otherwise, thanks for your time. Karma points are appreciated too.
Hi @smanojkumar , You could use the "IN" operator for this scenario. Let's assume the field name is "field1"; you could then construct the multi-select input as shown below. The output of the selected values would look like: field1 IN("value1", "value2"), which is the same as field1="value1" OR field1="value2". If you find the solution helpful, kindly upvote. Thanks
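A sketch of the Simple XML multiselect definition behind that answer (token name, field name and choice values are placeholders); the prefix/suffix and delimiter settings are what build the "field1 IN(...)" string from the selections:

```xml
<input type="multiselect" token="field1_tok">
  <label>Field 1</label>
  <!-- Wraps the whole selection: field1 IN( ... ) -->
  <prefix>field1 IN(</prefix>
  <suffix>)</suffix>
  <!-- Wraps each selected value in quotes, separated by commas -->
  <valuePrefix>"</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <delimiter>, </delimiter>
  <choice value="value1">value1</choice>
  <choice value="value2">value2</choice>
</input>
```

The base search would then reference the token as `... $field1_tok$ ...`.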
Hi @CSReviews , there isn't a limit on the volume of data you can index in a single day, even when exceeding the license. The only constraint is that you can exceed the 500 MB daily limit at most two times in 30 solar days; otherwise you'll be in violation and searches will be blocked. Remember that there's a Splunk license for students; for more info see https://www.splunk.com/en_us/about-us/splunk-pledge/academic-license-application.html?locale=en_us  Ciao. Giuseppe
Hi There! I would like to pass two values based on the selection of inputs in a multiselect drilldown. Assume I have multiselect options v1, v2, v3, v4. Based on the selection, e.g. if v1 and v2 are selected, I would like to pass "value1" and "value2" in an "OR" condition to a token of a base search. Thanks in advance!
Hello, I've set up an identity lookup using ldapsearch - it creates an identity of "username" that contains various details about the user, including the email address. It works well in identifying the user as `username` and `useremail@domain`. However, I'd also like it to identify users based on `domain\username` and `username@domain` (which is actually different from `useremail` in our case), since a lot of our logs contain the user field in those formats. What's the best way to do that?
Dear All, I have a lookup file with Transaction Details and Transaction Name, like below. It would be great if someone could suggest how to handle the scenario below.
Tran_lookup
Tran    Transaction_Details
ABC     Shopping
CDE     Rent
From my Splunk index I am running a stats command like below (Tran from the index matches Tran in Tran_lookup):
... | stats count(Tran) as count, avg(responsetime) as avgrt by Tran
I need to add the matching Transaction_Details from the lookup to the final stats results.
Current results: Tran, count, avgrt
Required results (matching Transaction_Details to be pulled based on Tran): Tran, Transaction_Details, count, avgrt
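A sketch of one way to enrich the stats output with the lookup, assuming the lookup file is named Tran_lookup.csv, its key column is Tran, and the index/field names below are placeholders to adjust:

```spl
index=my_index
| stats count(Tran) as count, avg(responsetime) as avgrt by Tran
| lookup Tran_lookup.csv Tran OUTPUT Transaction_Details
| table Tran, Transaction_Details, count, avgrt
```

Running the lookup after stats keeps it cheap, since it only has to match one row per distinct Tran value.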
Hello, I am looking to use Splunk Free edition to teach students about searching through logs. I plan on setting up Splunk within a virtual environment, generating logs, and then exporting the data, then having students install Splunk on their own machines and import the generated data. On the free edition, it states: "Are you planning to ingest a large (over 500 MB per day) data set only once, and then analyze it? The Splunk Free license lets you bulk load a much larger data sets up to 2 times within a 30 day period". My question: what is the maximum amount of data that can be imported at a single time? Although the virtual environment will be small (only a few workstations and servers), I am worried that the sample data sets I generate might be too large. Thank you
thank you
No, I am not using DB Connect, as that is a sort of limitation in my project. As I am new to Splunk, I am looking for some help in visualizing data in tabular format.
Hello partners, I request your kind support. I intend to activate the Linux ESCU correlations, but they do not work well because the data models are not complete. I know they are necessary, but my observation is that the Linux events do not contain all the values necessary to fill the data model. So my question to the community is the following: which audit, messages, or syslog rules must be active for the correct collection of events?
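Not an official ESCU requirement list, but as a hedged illustration, Linux auditd watch rules of roughly this shape are commonly used to generate the endpoint/change-style events those data models expect (the paths and key names here are only examples):

```shell
# /etc/audit/rules.d/example.rules (illustrative sketch only)
# Watch identity files for writes and attribute changes
-w /etc/passwd -p wa -k identity
-w /etc/group -p wa -k identity
# Watch sshd configuration changes
-w /etc/ssh/sshd_config -p wa -k sshd_config
# Record execution of a privileged command
-a always,exit -F path=/usr/bin/sudo -F perm=x -k privileged
```

The authoritative answer for which fields each correlation needs is the data model and the CIM field list for the relevant ESCU detection, so it may be worth working backwards from those.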
Thank you Rich!  Does it seem odd, though, that the underlying size of the events hasn't changed through this time?  What might be causing it to have to search through more slices for the same format and volume of data?
| where command!="jira" AND command!="macro" AND command!="filename"
You said you were getting high numbers - the change I suggested was just to the last line of your search so instead of counting the distinct values, you listed them, so you could investigate why there were so many.
Thank you for the response. I did manage to figure out my issue. First was the use of the multiple lookups: when I created the first lookup I used sort, which capped my results at 5000, and I needed closer to 30k. I fixed that by creating the inputlookup ACResults.csv without the sort that was limiting my results (the inputlookup was from Active Directory). Then I used the following search:
index=Myindex host=xx.xx.xx.xx "AAA user accounting Successful"
| dedup user
Then used a lookup where the user field values matched the field cn from my lookup:
| lookup ACResults.csv cn as user
Final result of my new search:
index=Myindex host=xx.xx.xx.xx "AAA user accounting Successful"
| dedup user
| lookup ACResults.csv cn as user
| eval Sector=extensionAttribute14
| stats count by Sector
| sort -count
Answering your questions:
- What is the relationship between the field you tabled ("user") and all the lookup tables? user = cn from Active Directory.
- And the relationship with "field_stats_wanted"? extensionAttribute14 for that user (cn) from Active Directory.
- Most importantly, why is inputlookup even considered? All my inputlookups had the same fields, so I thought appending would make it easier to search. I was wrong.
- "It usually means that the problem is not clearly understood." That was true, but I learned.
I hope this helps future users. Thank you all the same.