I question the requirement on a few levels.

First, "gather all data" is a huge task. Presumably, your Splunk environment has ingested multiple terabytes of data over time. Gathering it all is impractical.

Second, "visually readable format" is not only somewhat redundant, but also very vague. How should the data be presented? A text dump of every event ever received by Splunk would comply with the requirement, but probably would not be well received by executives.

Third, this sounds like a typical management directive where those asking don't know what they want. Push back and ask for more information. What problem are they trying to solve? Do executives really care about (or even understand) indexes and sourcetypes? They probably don't, and are more interested in high-level metrics like storage cost trends or the number of incidents detected.
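If the goal shifts to high-level metrics, a minimal sketch of a daily ingest-volume trend may be a useful starting point; it assumes the default internal license usage log (the index, source, and type=Usage filter are Splunk defaults, not details from this thread):

index=_internal source=*license_usage.log type=Usage
| timechart span=1d sum(b) as bytes
| eval GB=round(bytes/1024/1024/1024, 2)
| fields - bytes

A simple line chart of GB per day is usually far more digestible for executives than an index/sourcetype inventory.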
Hello Splunk Community, I am very new to Splunk and was given the following task and could really use some help: to gather all data that Splunk is collecting and put it in a visually readable format for executives. I have tried many things to accomplish this, such as using Enterprise Security > Audit > Index Audit and Forwarder Audit, creating custom classic dashboards, and using Dashboard Studio to play around with the data. Nothing seems to give me what I need. I have also tried the following:

| tstats values(source) as sources, values(sourcetype) as sourcetype where index=* by host
| lookup dnslookup clienthost as host OUTPUT clientip as src_ip

This method is very resource intensive and provides me with the information I need, but the source and sourcetype values are incredibly long and make the table hard for executives to read. Is there another way to do this?
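A lighter-weight sketch that summarizes at index/sourcetype granularity instead of per-host (the sorting and field names below are illustrative choices, not from the original post):

| tstats count latest(_time) as last_seen where index=* by index, sourcetype
| eval last_seen=strftime(last_seen, "%Y-%m-%d %H:%M")
| sort - count

This avoids the per-host DNS lookup entirely and produces a short table that is easier to present.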
Try counting the number of indexes for each EventId.

index=index1
| rename Number__c as EventId
| append [search index=index2 sourcetype="api" ]
| stats count, dc(index) as indexCount by EventId
| search count < 2 OR indexCount=1

Also, the append command is inefficient and not necessary in this case. Try this:

index=index1 OR (index=index2 sourcetype="api")
| rename Number__c as EventId
| stats count, dc(index) as indexCount by EventId
| search count < 2 OR indexCount=1
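If duplicates within index2 are a concern, a variant sketch that counts each index separately may help; the eval-based counts below are an illustrative assumption, not part of the original answer:

index=index1 OR (index=index2 sourcetype="api")
| rename Number__c as EventId
| stats count(eval(index=="index1")) as count1, count(eval(index=="index2")) as count2 by EventId
| where count1!=1 OR count2!=1

This flags any EventId that does not have exactly one record in each index, regardless of the total count.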
Hello Splunkers, I was wondering if it's possible to combine adaptive and static thresholds in IT Service Intelligence for the same KPI.

As an example, let's consider the percentage of swap memory used by a host. If I apply static thresholds, I know there's an issue only when the last detected value exceeds a fixed number (we can call this "the old-style monitoring"). On the other hand, if I use ITSI adaptive thresholding, the boundary will adapt itself using historical data. This solution would be great, but let's imagine that the swap memory used by the system slowly but continuously grows over days and weeks. At a certain point, it will reach 100%, but the KPI state will say "normal" because that value is, in some way, aligned with previous ones.

Is there a way to use the adaptive thresholding behavior while keeping the "emergency" values fixed? Thanks in advance. Happy Splunking!
After upgrading Splunk core to release 9.4.0, when I try to bind an LDAP group name to a role inside Splunk (I have about 200 roles), Splunk shows me only 30 roles. I tried to bypass this bug/issue by setting the mapping via the conf file and then restarting the Splunk service, but this is tedious. Have you encountered this issue? How can it be resolved?

NOTE: Environment: Search Head Cluster, Splunk Enterprise rel. 9.4.0
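For reference, the conf-file workaround mentioned above typically lives in authentication.conf; a minimal sketch, assuming an LDAP strategy named "corpLDAP" and example group names (both are assumptions, not from the post):

# authentication.conf
[roleMap_corpLDAP]
admin = Splunk-Admins
power = Splunk-Power;Splunk-Analysts

Each line maps a Splunk role to one or more LDAP group names, separated by semicolons; a restart is needed for changes to take effect.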
Hi @navan1, only one question: do you want to search in a specific field or across the whole raw event?

If in one field (user) that's the same in both the main search and the lookup, please try this:

index="my_index" sourcetype="my_sourcetype" [ | inputlookup users_list.csv | fields user ]
| table app action signinDateTime user shortname

If you want to perform a full-text search of the lookup's user values in the main search, you can try:

index="my_index" sourcetype="my_sourcetype" [ | inputlookup users_list.csv | rename user AS query | fields query ]
| table app action signinDateTime user shortname

Ciao. Giuseppe
Hi @jkamdar, Windows could be OK for a lab, but not for a production system!

First question: is it a stand-alone server or a distributed environment?

If it's a stand-alone server, it's simple and I can give you some tips: start from the same Splunk version, copy the apps from the old instance to the new one, and modify any monitor inputs to use the new paths.

If instead it's a distributed environment, you can copy the indexes.conf files into one app containing all the index definitions, and copy all the apps to the Search Heads. For the cluster or distributed search configurations, it's easier to start as a new infrastructure, configuring all the connections from scratch.

These are a few pointers, but the easiest way is to start from the beginning, copying the index files one by one. The main issue is migrating the data.

Ciao. Giuseppe
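As an illustration of the "new paths" point above, a minimal sketch of a monitor input rewritten for Windows (the drive letter, path, index, and sourcetype are assumptions):

# inputs.conf on the new Windows host
[monitor://D:\logs\app]
index = app_logs
sourcetype = app:log

The *nix-style [monitor:///var/log/app] stanzas will not match anything on Windows, so each monitored path needs this kind of rewrite.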
I have a search that searches 2 different indexes. We expect that there is 1 record from each index for a single id. The search is pretty simple:

index=index1
| rename Number__c as EventId
| append [search index=index2 sourcetype="api" ]
| stats count by EventId
| search count < 2

What I would like to do now is verify that there is a single record from each index for each EventId, to ensure that the count of 2 isn't 2 records in a single index. There are times when, in index2, a single EventId has more than one record, which makes the count inaccurate because it's not evaluating whether there was a record for it in index1.
Hi, we have installed TrackMe in our Splunk Cloud for log and host monitoring. I have set up alerts for a few sourcetypes, tracking whether no logs report to Splunk for an hour.

Now, what I want to understand is: if an alert has been triggered and the issue has been taken care of, how do we acknowledge the alert? I am unfamiliar with the UI of TrackMe. My version is 2.1.7. The one I have circled is the number of alerts that have been triggered. Let's say the issue is fixed for one of the sourcetypes, but the number is still showing as 4. Could someone please explain?
Does the axis show today's date but no data fill? Or is it that the axis is cut off as well?
Hello, I have a lookup table which contains fields as below:

user        shortname
email 1     name1
email 2     name2

I want to search a specific index and find whether the users in the lookup table logged in to any app during the past month. I am trying something like this but am not getting an exact match with the users in the lookup table. Please help here.

| inputlookup users_list.csv
| join user type=outer [ search index="my_index" sourcetype="my_sourcetype" | fields app action signinDateTime user shortname ]
| table app action signinDateTime user shortname
Hi, yes, you read it right. One of my small labs is planning to migrate their Splunk deployment from RHEL to Windows. Their main reason is that they do not have a Linux admin.

I am going to help them migrate, but I am a Linux admin and have never done a migration from one platform to another. Has anyone done that? Any tips on how to go about doing it?
As per the subject, I'm attempting to convert a rex expression in my search into a proper field extraction using the Field Extractor, so I can drop the rex and use the field in my base search directly. The rex expression works perfectly but requires the use of max_match=0 in order to get all the results. Unless I'm mistaken (which is very possible), I don't have that option available in the Field Extractor, and because of that, the regex only picks up one value instead of multiple. I've tested the regex on regex101, and it works fine there, grabbing all the values properly. It's just in the Field Extractor that it isn't grabbing everything. The rex expression does use a specific field rather than just running on _raw, but the search does work when running on _raw (I verified). The rex expression is placed below, followed by the regex itself.

rex field=AttachmentDetails max_match=0 "(?:'(?<attachments>.*?)'.*?'fileHash': '(?<sha256>\w+)'}.*?\{.*?\}\}[,\}]\s?)"

(?:'(?<attachments>.*?)'.*?'fileHash': '(?<sha256>\w+)'}.*?\{.*?\}\}[,\}]\s?)

Below, I've placed some test data you can use on regex101 to verify the expression above. It captures both fields on the site, just not in Splunk, and I can't tell why. Perhaps I've misunderstood how grouping works in regex.

orci eget eros faucibus tincidunt. Duis leo. Sed fringilla mauris sit amet nibh. Donec sodales sagittis magna. Sed consequat, leo eget bibendum sodales, augue velit cursus nunc, {'NotSecrets!!.txt': 'fileHash': 'a3b9adaee5b83973e8789edd7b04b95f25412c764c8ff29d0c63abf25b772646'}, {}}, 'Secrets!!.txt': 'fileHash': 'c092a4db704b9c6f61d6a221b8f0ea5f719e7f674f66fede01a522563687d24b'}, {}}} orci eget eros faucibus tincidunt. Duis leo. Sed fringilla mauris sit amet nibh. Donec sodales sagittis magna. Sed consequat, leo eget bibendum sodales, augue velit cursus nunc,
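Where the Field Extractor UI can't express repeated matches, a manual search-time extraction may; a minimal sketch, assuming the sourcetype is called "my_sourcetype" (the stanza names are assumptions; the regex is the one from the post):

# transforms.conf
[attachment_details_extract]
SOURCE_KEY = AttachmentDetails
REGEX = (?:'(?<attachments>.*?)'.*?'fileHash': '(?<sha256>\w+)'}.*?\{.*?\}\}[,\}]\s?)
REPEAT_MATCH = true
MV_ADD = true

# props.conf
[my_sourcetype]
REPORT-attachments = attachment_details_extract

REPEAT_MATCH = true is the transforms.conf analogue of max_match=0, and MV_ADD = true lets each match append to the multivalued fields.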
It's best to use a local Splunk user for this even if all other users are SAML or SSO users. The user needs to have a role (unless you can use only predefined reports which are run as owner), and if/when a user has a role in Splunk, he/she can always log in. So @richgalloway's solution solves this issue.
Or you can do it on the CLI also: https://docs.splunk.com/Documentation/Splunk/9.4.0/Admin/LicenserCLIcommands There is other license-related material behind the above link too.
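For example, a minimal sketch of the CLI route (the license file path is an assumption):

splunk add licenses /opt/splunk/etc/licenses/enterprise/mylicense.lic
splunk list licenses

Run these on the license manager; the first command installs the license file and the second confirms it is active.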
The license file only needs to be installed on the License Manager (your SHCD). Sign in to the Deployer and go to Settings->Licensing. Then click the Add License button and follow the prompts to upload or copy-paste the license file. That's it. Once the new license is installed, you'll see it on the License page.
I think that since those files are usually ingested directly on Windows nodes via the correct input method, this is not a normal use case. Those who need to investigate those files later usually already know how to do it.
Create an authentication token for the API user and give them only that.  Without a password (which must be defined, but doesn't have to be disclosed), the user cannot log in to the UI.
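A minimal sketch of creating such a token over REST (the host, credentials, user name, and audience value are all assumptions):

curl -k -u admin:changeme -X POST https://localhost:8089/services/authorization/tokens -d name=api_user -d audience=api

Token authentication must be enabled on the instance first (Settings -> Tokens in the UI); the response contains the token string to hand to the API user.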
Hi there, I am confused about the configuration steps for getting data in from Salesforce. Adding a Salesforce account, I want to use OAuth, but I am only a Splunk admin; the technical user to be used is managed by our Salesforce admin. My understanding is that one of us would need admin capabilities on both instances to make it work?

What we tried: configuration of the user on the Salesforce side from another account, which is a Salesforce admin, and configuration of the add-on on the Splunk side with my admin account. The redirect link has been added to Salesforce. I tried to set up the add-on on Splunk as explained in the documentation of the Add-on for Salesforce, but an error occurs when trying to connect them.

Another hindrance is the use of LDAP. To make it work, I would need to give the Salesforce admin Splunk admin capabilities, or the other way around, I would need to get Salesforce admin rights. But that is something we do not want, as the capabilities should remain as they are: Splunk for Splunk, Salesforce for Salesforce.

Is there any other way to make it work, with the use of a technical user? Or is it just not possible with OAuth?

Best regards
Hi, I am currently working on an Adaptive Response that notifies us whenever there is a Notable in our queue of a certain urgency. The notification must include the rule title and its configured urgency. I've been trying to solve this with the Add-On Builder, but so far I have only managed to pull the rule title via helper.settings.get("search_name"). I tried to get the urgency with get_events(), but that only seems to contain the details of the correlation search. Does anyone have a pointer to what I'm missing?
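A minimal diagnostic sketch for the Add-On Builder alert action, assuming the standard process_event entry point; it dumps every field of each result row so you can see whether "urgency" (or any other notable field) actually reaches the payload. The field name "urgency" is an assumption and will only be present if the correlation search returns it:

def process_event(helper, *args, **kwargs):
    # The saved search name configured for the correlation search.
    rule_title = helper.settings.get("search_name")
    for event in helper.get_events():
        # Log all available keys to inspect what the payload really contains.
        helper.log_info("rule={} fields={}".format(rule_title, sorted(event.keys())))
        urgency = event.get("urgency")  # assumption: set only if returned by the search
        if urgency:
            helper.log_info("rule={} urgency={}".format(rule_title, urgency))
    return 0

If urgency never shows up in the logged keys, one option is to add it to the correlation search's output fields so that get_events() carries it.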