Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

Thanks everyone for your responses. The issue was due to the DATETIME_CONFIG setting in props.conf. It was set to a custom value, which was causing packets to drop. Setting DATETIME_CONFIG = NONE resolved the issue.
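For reference, the fix as a minimal props.conf sketch (the sourcetype name here is a placeholder; apply it to whatever stanza carried the custom value):

[my_sourcetype]
DATETIME_CONFIG = NONE

With NONE, Splunk skips timestamp extraction for that sourcetype and stamps events with the time they are received.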
Thank you for your help! The provided URLs are very useful.
Hi @askargbo, I hit the same issue as you when I did a jump upgrade from 9.1 to 9.3. How did you solve the problem? Can you share the fix? Thanks.
Are you doing indexed extractions on the JSON data? That's not such a good idea, as it can bloat your index with stuff you don't need there. The question is not about "optimising for large datasets"; it's more about using the right queries for the data you have, large or small. I suggest you post some example queries, as the community can offer advice on whether they are good or not so good - use the code block syntax button above (<>).

See my post in another thread about performance: https://community.splunk.com/t5/Splunk-Search/Best-Search-Performance-when-adding-filtering-of-events-to-query/m-p/750038#M242251

As @PickleRick says, the job inspector is your friend (see scanCount), and reducing that number will improve searches. Use subsearches sparingly, and avoid join and transaction - they are almost never necessary.

Summary indexing itself will not necessarily speed up your searches, particularly if the search that creates the summary index is bad and the search that searches the summary index is also bad. A summary index does not mean faster - it's just another index with data, and you can still write bad searches against it.

Please share some of your worst searches and we can try to help.
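To make the filtering and join advice concrete, here is a hedged sketch (index, sourcetype, and field names are invented):

Slower pattern - retrieve everything, then filter and join:

index=web sourcetype=access
| join user [ search index=web sourcetype=app_errors | fields user ]
| search status=500

Faster pattern - filter in the base search and replace the join with stats:

index=web ((sourcetype=access status=500) OR sourcetype=app_errors)
| stats count values(sourcetype) as sources by user
| where mvcount(sources) > 1

The second version scans fewer events (lower scanCount in the job inspector) and avoids the subsearch limits that come with join.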
Hi all,
Multiple universal forwarders are installed on both Windows and Linux, and they work fine. The deployment server's forwarder management tab no longer shows them; however, after making changes to apps in /opt/splunk/etc/deployment-apps/app, the forwarders called the deployment server and received the changes, but I still have issues managing them. I found a lot of these logs when I checked the internal index on the search head:

INFO DC:DeploymentClient [8072 PhonehomeThread] - channel=deploymentServer/phoneHome/default Will retry sending phonehome to DS; err=not_connected

There is no problem connecting from the UF to the DS on port TCP 8089. Does anyone have any ideas on how I could solve this?

DS version = 9.3.1
UF version = 9.3.1

$ splunk show deploy-poll
Warning: Attempting to revert the SPLUNK_HOME ownership
Warning: Executing "chown -R splunkfwd:splunkfwd /opt/splunkforwarder"
Deployment Server URI is set to "10.121.29.10:8089".
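For anyone debugging something similar, two standard checks may help narrow down which side is failing (paths assume a default Linux install):

On the deployment server, list the clients that have successfully phoned home:

/opt/splunk/bin/splunk list deploy-clients

On the search head, review phonehome activity and errors across all forwarders:

index=_internal sourcetype=splunkd component=DC:DeploymentClient

Neither check is specific to this case; they just establish whether the DS is registering any phonehomes at all.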
It doesn't make sense that there is no diagram option or support for this... I need to read more about this Link Graph. What is the best way to build a diagram for a network (IP, VIP, FW, subnet, ...)?
Not necessarily. There are separate addons for specific services (a separate one for Teams, another for Security (Defender and Defender for Endpoint), and so on). This one will cover getting data from Event Hub, but you might need another addon to parse your data properly and map fields to CIM. I'm not sure, though, whether pushing the data through Event Hub will mangle the events, since some of those addons expect the inputs to run differently (Graph API?). You need to go to Splunkbase, type in "microsoft", and check it out.
Hi @LS1,
you should try something like this:

index=security action IN ("Blocked", "Started", "Success")

I suggested clicking on the value to be sure that the syntax is correct.
Ciao.
Giuseppe
Hi @Nawab,
if an LDAP user has never logged in to Splunk, you won't see them; you can see only users that have logged in at least once. To see the logged-in users and their last login timestamp, you can run a simple search like the following:

index=_audit action=success sourcetype=audittrail
| stats latest(_time) AS _time count BY user

It's the same if you list users in the GUI under [Settings > Users]: you can see only internal users and the LDAP users that have logged in.
Ciao.
Giuseppe
Hello GCusello, yes, I clicked on the word(s) "Blocked" and "Started" in the "Action" field window. When I use the query index=security action="*", all three actions (Blocked, Started, and Success) appear, as shown in my original question. If I click on "Success", all of my events are returned; when I click on the other two, my result is "No results found". I went down the list of Interesting Fields and tried all of the fields labeled with an 'a' (not sure how to type the exact character) instead of an octothorpe (#), and every one of them worked properly. When I say I tried, I mean I opened the Interesting Fields and clicked on the desired selection, which alters the search criteria, the same way I have done with Blocked and Started. I do not know how the categories get created in the Interesting Fields, but it appears there is something wrong with Blocked and Started.
@siv Dashboard Studio does not support custom visualizations (like Network Diagram Viz from Splunkbase). These visualizations are only supported in Classic (Simple XML) dashboards. If you want to stay in Dashboard Studio, use the built-in Link Graph, along the lines of the sketch below.
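A minimal sketch of a search that might feed a Link Graph (the index, sourcetype, and field names are made up, and the exact column-to-node mapping depends on the Link Graph's documented data format, so treat this as a starting point rather than a definitive recipe):

index=network sourcetype=topology
| stats count by src_ip, vip, firewall, subnet

In Dashboard Studio you would then add a Link Graph visualization and point it at a search shaped like this, adjusting the fields to match your own network data.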
Thank you. Will it include logs from all of my products above?
I have a requirement where I want to see all users and their last login time. We are connected through LDAP, so Settings > Users > last login time does not work.

I tried the query below, but it only shows the latest users, not all of them:

| rest /services/authentication/httpauth-tokens splunk_server=*
| table timeAccessed userName splunk_server

I also want to know when a user was created on Splunk, as users are created via LDAP.
@Amire22 Hello, you can install the Splunk Add-on for Microsoft Cloud Services to onboard the logs to Splunk.
https://splunkbase.splunk.com/app/3110
https://lantern.splunk.com/Data_Descriptors/Microsoft/Getting_started_with_Microsoft_Azure_Event_Hub_data
https://splunk.github.io/splunk-add-on-for-microsoft-cloud-services/
If your AppDynamics controller uses a self-signed SSL certificate, Splunk may fail to establish a connection due to certificate verification errors. A common fix is to import the certificate into the Java keystore used by the controller or integration layer (like GlassFish). You can do this using the following command:

keytool -import -alias appd-cert -keystore $JAVA_HOME/lib/security/cacerts -file /path/to/your/certificate.crt

Make sure to restart the relevant service after importing the certificate. I found this resource via Google Search, and it may help you: https://sslinsights.com/how-to-install-ssl-certificate-on-glassfish/
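As a quick sanity check after the import (standard keytool usage; the default password for the cacerts keystore is usually changeit):

keytool -list -alias appd-cert -keystore $JAVA_HOME/lib/security/cacerts

If the alias is listed, the certificate is in the keystore, and a restart of the service should pick it up.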
What options do I have for diagrams in Dashboard Studio? I have version 9.1.8.
I would appreciate help from anyone who has encountered a similar problem. We are using Microsoft's E5 licensing with the following products:

Intune
Entra ID
Defender for Endpoint
Office 365
Teams

All events from Microsoft are streamed to Event Hub and from there to our Splunk ES. We are very confused and don't know which add-ons we should install. I would love to hear from anyone who uses these technologies.

Splunk Enterprise Security
You still need the timechart from your original search:

my query
| rex field=_raw "Time=(?<NewTime>\d{4}\.\d+)"
| eval TimeMilliseconds=(NewTime*1000)
| timechart span=1d count as total, count(eval(TimeMilliseconds<=1000)) as "<1sec", count(eval(TimeMilliseconds>1000 AND TimeMilliseconds<=2000)) as "1sec-2sec", count(eval(TimeMilliseconds>2000 AND TimeMilliseconds<=5000)) as "2sec-5sec", count(eval(TimeMilliseconds>48000)) as "48sec+" by msgsource
| untable _time msgsource count
| eval group=mvindex(split(msgsource,": "),0)
| eval msgsource=mvindex(split(msgsource,": "),1)
| eval _time=_time.":".msgsource
| xyseries _time group count
| eval msgsource=mvindex(split(_time,":"),1)
| eval _time=mvindex(split(_time,":"),0)
| table _time msgsource total *
As @ITWhisperer pointed out, your events don't seem to contain the action field directly, nor its values. They must then be populated by means of knowledge objects, most probably from the TA_nix add-on. Intuitively, it smells like some kind of permission issue, but I'm not 100% sure about that.
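If you want to confirm where the action field is defined, one standard check is btool (the sourcetype name here is a placeholder):

$SPLUNK_HOME/bin/splunk btool props list your_sourcetype --debug

The --debug flag prefixes each setting with the app it comes from, so you can see whether an EVAL, FIELDALIAS, or lookup for action exists and which app - and therefore which permission scope - it lives in.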