All Posts

Hi @Avantika , where did you locate these conf files? They must be located on the first full Splunk instance the data passes through, in other words on the first Heavy Forwarder or, if not present, on the Indexers. Ciao. Giuseppe
I totally understand where you are coming from and what you are saying.  Alas, I think at this point management is attempting to understand what Splunk is collecting so that we can better understand what Splunk might potentially be missing (for example, when someone stands up a server and doesn't tell anyone). I have broken the metrics down by time in a more readable format (such as the last 30 minutes or 24 hours) to test the SPL queries I've been attempting.  That is why I have been focused on organizing the data by Host, Sourcetype, Source, and Index, so that I can capture everything while understanding the resource intensity associated with it. Additionally, I created a Dashboard Studio dashboard that showcases each data point listed above in its own tab; it still shows everything but isn't in one view or table.
Whoops, apologies, posted on the wrong board.  If a mod could help me move it to a more appropriate board that'd be excellent.
What does currentDBsizeMB actually represent? I'm seeing some discrepancies in the actual file system consumption between our indexers and cold storage volumes (which are NVMe-over-IP mounts on different servers). Does currentDBsizeMB include just hot buckets? Hot/warm? Or hot/warm/cold? Does it include replica copies, or just one copy, so that you have to multiply the value by your replication factor to get the "true" size of the index on disk? I have been unable to find a definitive answer on this; thanks in advance to anyone who can help shed some light on this field.
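One way to cross-check this (a sketch only; the index name is a placeholder and field names should be verified in your environment) is to compare per-bucket sizes from dbinspect against what the REST endpoint reports per indexer:

| dbinspect index=your_index
| stats sum(sizeOnDiskMB) as diskMB by splunk_server, state

| rest /services/data/indexes splunk_server=*
| search title=your_index
| table splunk_server title currentDBSizeMB

Comparing the dbinspect totals per state (hot/warm/cold) against currentDBSizeMB per indexer should help show whether cold buckets and/or replica copies on other peers account for the difference you're seeing on disk.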
I question the requirement on a few levels.  First, "gather all data" is a huge task.  Presumably, your Splunk environment has ingested multiple terabytes of data over time.  Gathering it all is impractical. Second, "visually readable format".  It's not only somewhat redundant, but also very vague.  How should the data be presented?  A text dump of every event ever received by Splunk would comply with the requirement, but probably would not be well received by executives. Third, this sounds like a typical management directive where those asking don't know what they want. Push back and ask for more information.  What problem are they trying to solve?  Do executives really care about (or even understand) indexes and sourcetypes?  They probably don't and are more interested in high-level metrics like storage cost trends or number of incidents detected.
Hello Splunk Community,  I am very new to Splunk and was given the following task and could really use some help: gather all the data that Splunk is collecting and put it in a visually readable format for executives. I have tried many things to accomplish this, such as using Enterprise Security > Audit > Index Audit and Forwarder Audit, creating custom classic dashboards, and using Dashboard Studio to play around with the data. Nothing seems to give me what I need.  I have also tried the following:
| tstats values(source) as sources, values(sourcetype) as sourcetype where index=* by host
| lookup dnslookup clienthost as host OUTPUT clientip as src_ip
This method is very resource intensive and provides the information I need, but the Source and Sourcetype values are incredibly long and make the table hard for executives to read. Is there another way to do this?
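If it helps, here is a less resource-intensive sketch (adjust the index filter and time range for your environment): aggregating with tstats by index and sourcetype, instead of collecting every source value per host, keeps the result small enough for an executive-readable table:

| tstats count as events max(_time) as last_seen where index=* by index, sourcetype
| eval last_seen=strftime(last_seen, "%Y-%m-%d %H:%M")
| sort - events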
Try counting the number of indexes for each EventId.
index=index1
| rename Number__c as EventId
| append [search index=index2 sourcetype="api" ]
| stats count, dc(index) as indexCount by EventId
| search count < 2 OR indexCount=1
Also, the append command is inefficient and not necessary in this case.  Try this:
index=index1 OR (index=index2 sourcetype="api")
| rename Number__c as EventId
| stats count, dc(index) as indexCount by EventId
| search count < 2 OR indexCount=1
Hello Splunkers, I was wondering if it's possible to combine adaptive and static thresholds in IT Service Intelligence for the same KPI. As an example, let's consider the percentage of swap memory used by a host. If I apply static thresholds, I know there's an issue only when the last detected value exceeds a fixed number (we can call this "old-style monitoring"). On the other hand, if I use ITSI adaptive thresholding, the boundary will adapt itself using historical data. This solution would be great, but let's imagine that the swap memory used by the system grows slowly but continuously over days and weeks. At a certain point it will reach 100%, but the KPI state will say "normal" because that value is, in some way, aligned with previous ones. Is there a way to use the adaptive thresholding behavior while keeping the "emergency" values fixed? Thanks in advance. Happy Splunking!
After upgrading Splunk core to release 9.4.0, when I try to bind an LDAP group name to a role inside Splunk (I have about 200 roles), Splunk shows me only 30 roles. I tried to work around this bug/issue by setting the mapping via the conf file and then restarting the Splunk service, but this is tedious. Have you encountered this issue? How can it be resolved?   NOTE: Environment: Search Head Cluster, Splunk Enterprise rel. 9.4.0
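For anyone hitting the same thing, the conf-file workaround mentioned above usually means editing authentication.conf (pushed from the deployer in a SHC). A minimal sketch, assuming a hypothetical LDAP strategy named corp_ldap and made-up group names:

[roleMap_corp_ldap]
# <splunk role> = <LDAP group>; separate multiple groups with semicolons
admin = splunk-admins
power = splunk-power-users
my_custom_role = splunk-custom-group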
Hi @navan1 , only one question: do you want to search in a defined field or in the whole raw event?
If in one field (user) that is the same in both the main search and the lookup, please try this:
index="my_index" sourcetype="my_sourcetype" [ | inputlookup users_list.csv | fields user ]
| table app action signinDateTime user shortname
If you want to perform a full-text search of the lookup's user values in the main search, you can try:
index="my_index" sourcetype="my_sourcetype" [ | inputlookup users_list.csv | rename user AS query | fields query ]
| table app action signinDateTime user shortname
Ciao. Giuseppe
Hi @jkamdar , Windows could be OK for a lab, but not for a production system! First question: is it a stand-alone server or a distributed environment? If it's a stand-alone server, it's simple and I can give you some tips: start from the same Splunk version, copy the apps from the old instance to the new one, and modify any monitor inputs to use the new paths. If instead it's a distributed environment, you can copy the indexes.conf files into one app containing all the index definitions, and copy all the apps to the Search Heads. For the cluster or distributed search configurations, it's easier to start as a new infrastructure and configure all the connections from scratch. These are just a few pointers, but the easiest way is to start from the beginning, copying the index files one by one. The main issue is migrating the data. Ciao. Giuseppe
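To illustrate the "modify monitor inputs" step, a monitor stanza in inputs.conf needs its Linux path rewritten to the equivalent Windows path (the paths below are hypothetical examples, not your actual inputs):

# Old (RHEL)
[monitor:///var/log/myapp/app.log]
index = myapp
sourcetype = myapp:log

# New (Windows)
[monitor://D:\logs\myapp\app.log]
index = myapp
sourcetype = myapp:log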
I have a search that searches 2 different indexes. We expect that there is 1 record from each index for a single id. The search is pretty simple:
index=index1
| rename Number__c as EventId
| append [search index=index2 sourcetype="api" ]
| stats count by EventId
| search count < 2
What I would like to do now is verify that there is a single record from each index for each EventId, to ensure that a count of 2 isn't 2 records in a single index. There are times where, in index2, a single EventId has more than one record, which makes the count inaccurate because it's not evaluating whether there was a record for it in index1.
Hi,  We have installed TrackMe in our Splunk Cloud for log and host monitoring. I have set up alerts for a few sourcetypes, tracking whether no logs report to Splunk for an hour. Now, what I want to understand is: if an alert has been triggered and the issue has been taken care of, how do we acknowledge the alert? I am unfamiliar with the TrackMe UI. My version is 2.1.7. The item I have circled is the number of alerts that have triggered. Let's say the issue is fixed for one of the sourcetypes, but the number still shows 4. Could someone please explain?
Does the axis show today's date but no data fill, or is the axis cut off as well?
Hello,  I have a lookup table which contains fields as below.
user          shortname
email 1       name1
email 2       name2
I want to search a specific index and find whether the users in the lookup table logged in to any app in the past month. I am trying something like this but am not getting an exact match with the users in the lookup table. Please help here.
| inputlookup users_list.csv
| join user type=outer [ search index="my_index" sourcetype="my_sourcetype" | fields app action signinDateTime user shortname ]
| table app action signinDateTime user shortname
Hi,  Yes, you read it right. One of my small labs is planning to migrate their Splunk deployment from RHEL to Windows. Their main reason is that they do not have a Linux admin.    I am going to help them migrate, but I am a Linux admin and have never done a migration from one platform to another. Has anyone done that? Any tips on how to go about it?
As per the subject, I'm attempting to convert a rex expression in my search into a proper field extraction using the Field Extractor so I can drop the rex and use the field in my base search directly. The rex expression works perfectly but requires the use of max_match=0 in order to get all the results. Unless I'm mistaken (which is very possible), I don't have that option available in the Field Extractor, and because of that, the regex only picks up one value instead of multiple. I've tested the regex on regex101, and it works fine there, grabbing all the values properly. It's just in the Field Extractor that it isn't grabbing everything. The rex expression itself does use a specific field rather than just running on _raw, but the search does work when running on _raw (I verified). The rex expression is placed below, followed by the regex itself.
rex field=AttachmentDetails max_match=0 "(?:'(?<attachments>.*?)'.*?'fileHash': '(?<sha256>\w+)'}.*?\{.*?\}\}[,\}]\s?)"
(?:'(?<attachments>.*?)'.*?'fileHash': '(?<sha256>\w+)'}.*?\{.*?\}\}[,\}]\s?)
Below, I've placed some test data you can use on regex101 to verify the expression above. It captures both fields on the site, just not in Splunk, and I can't tell why. Perhaps I've misunderstood how grouping works in regex.
orci eget eros faucibus tincidunt. Duis leo. Sed fringilla mauris sit amet nibh. Donec sodales sagittis magna. Sed consequat, leo eget bibendum sodales, augue velit cursus nunc, {'NotSecrets!!.txt': 'fileHash': 'a3b9adaee5b83973e8789edd7b04b95f25412c764c8ff29d0c63abf25b772646'}, {}}, 'Secrets!!.txt': 'fileHash': 'c092a4db704b9c6f61d6a221b8f0ea5f719e7f674f66fede01a522563687d24b'}, {}}} orci eget eros faucibus tincidunt. Duis leo. Sed fringilla mauris sit amet nibh. Donec sodales sagittis magna. Sed consequat, leo eget bibendum sodales, augue velit cursus nunc,
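One possible approach, sketched under assumptions (the sourcetype and stanza names below are hypothetical): the Field Extractor writes an EXTRACT setting, which only captures the first match per event, while a REPORT extraction defined in transforms.conf can set MV_ADD = true to keep every match as a multivalued field:

# props.conf
[my:sourcetype]
REPORT-attachment_hashes = extract_attachment_hashes

# transforms.conf
[extract_attachment_hashes]
# run the regex against the AttachmentDetails field instead of _raw
# (verify this field is available at extraction time in your setup)
SOURCE_KEY = AttachmentDetails
REGEX = (?:'(?<attachments>.*?)'.*?'fileHash': '(?<sha256>\w+)'}.*?\{.*?\}\}[,\}]\s?)
MV_ADD = true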
It's best to use a local Splunk user for this even if all other users are SAML or SSO users. The user needs to have a role (unless you can use only predefined reports which are run as owner), and if/when a user has a role in Splunk, he/she can always log in. So @richgalloway 's solution solves this issue.
Or you can do it via the CLI as well: https://docs.splunk.com/Documentation/Splunk/9.4.0/Admin/LicenserCLIcommands There is other license-related material behind that link too.
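For example (a sketch; the license file path is a placeholder), on the license manager:

splunk add licenses /tmp/new_enterprise.lic
splunk list licenses
# a restart may be needed for the new license to take effect
splunk restart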
The license file only needs to be installed on the License Manager (your SHCD).  Sign in to the Deployer and go to Settings->Licensing.  Then click the Add License button and follow the prompts to upload or copy-paste the license file.  That's it.  Once the new license is installed, you'll see it on the License page.