All Posts


What is the timezone your server is in? current_time takes the current system time.
The problem turned out to be that, since I have the add-on on a Heavy Forwarder and that Splunk instance only had the Forwarder license installed, DB Connect requires something called the KV Store, which only works with a paid license. After asking support I was provided with a free license and the problem was solved.
The forwarder is forwarding (e.g.) /var/log/test.txt, and the file is test.txt. It is active because I can see the events from search, except the dates are not being parsed. props.conf is sitting in /etc/apps/test/local/props.conf
I don't know what dataset you're working with, but the first thing that comes to mind is that your datamodel is not accelerated. If you don't have accelerated summaries, tstats has nothing to operate on. And it's completely irrelevant whether it's a Docker image, a VM, or a bare-metal install.
Ok. First things first - where and how do you ingest the files, and where do you have the props.conf with the DATETIME_CONFIG setting? And are you sure it is actually active?
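For reference, a minimal props.conf sketch (the source path comes from the question; the timestamp settings are placeholders to adapt to the actual log format). It must live on the instance that parses the data - the first heavy forwarder or the indexer in the path, not a universal forwarder:

```ini
# e.g. $SPLUNK_HOME/etc/apps/test/local/props.conf on the parsing tier
[source::/var/log/test.txt]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19
# or, to skip timestamp extraction and use index time instead:
# DATETIME_CONFIG = CURRENT
```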
We have hundreds of servers in the environment running IIS. The servers have different IIS logging levels set, which causes inconsistent results in Splunk when searching our web index. We are looking for a way to standardize the IIS logging level so we can build proper detections. Note: we also have this issue with Apache logs, but we can concentrate on IIS for this.
We need more info. Especially relevant configs.
If your event data includes a mixture of languages, a straightforward approach is to restructure your reference table: the first column holds every possible "Tools" value, including the English spellings, and the second column holds the corresponding English translation, e.g. "TranslatedTool". With that in place, you can add a lookup command to your search, or set up an automatic lookup, to generate the "TranslatedTool" field with the correct English translations.
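As a concrete sketch of that arrangement (the column names "Tools" and "TranslatedTool" come from the post; the tool values are invented for illustration), the lookup behaves like a plain dictionary from any observed spelling to the English term:

```python
# Hypothetical reference table: column 1 = every observed "Tools" value
# (including the English spelling itself), column 2 = "TranslatedTool".
reference_table = {
    "Hammer": "Hammer",     # English maps to itself
    "Marteau": "Hammer",    # French
    "Martillo": "Hammer",   # Spanish
}

def translated_tool(tools_value):
    # Equivalent in spirit to: | lookup <table> Tools OUTPUT TranslatedTool
    return reference_table.get(tools_value)

print(translated_tool("Martillo"))  # Hammer
```

In Splunk itself this table would be a lookup CSV with the same two columns, wired up either inline with the lookup command or as an automatic lookup.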
In Step 2 "Add the Dataset" of "Create Anomaly Job" within the Splunk App for Anomaly Detection, when running the following SPL we get a warning:

index=wineventlog_security | timechart count

"Could not load lookup=LOOKUP-HTTP_STATUS No matching fields exist."

What can it be? We use the following versions:
Splunk App for Anomaly Detection - 1.1.0
Python for Scientific Computing - 4.1.2
Splunk Machine Learning Toolkit - 5.4.0
Hi @yuanliu
The reason Splunk is unable to extract the fields using built-in field extraction is that it only allows a single delimiter (pipe, comma, or space), not events that mix multiple delimiters such as pipe and space. Thank you for sharing the rex command. I have tried it; however, I have since changed the log format to include additional details. I played with the regex you shared, but it extracts only a portion of event_name. I believe this is mainly because the new format has a longer event_name than the previous format: for example, for abc-pendingcardtransfer-networki it extracts only "abc" as the event name. I am also trying to update the regex to exclude events that start with [INFO], as they are not required.

(?<event_name>\w+)\|(?<task_id>\d+) (?<event_id>\d+)

New sample log format:
abc-pendingcardtransfer-networki|30 77784791 1547
logs-incomingtransaction-datainpu|3 7876821 1458
[INFO] 2019-09-01 13:52:38.22 [main] Apache - Number of netwrok events is 25
dog-acceptedtransactions-incoming|1 746566 1887
sfgd_SGDJELE|2 0 0
es009874_e026516516|28 455255555 785
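A likely fix, based on my reading of the samples above (not confirmed by the poster): \w does not match hyphens, so the event_name group stops at the first "-". Adding "-" to the character class and anchoring at the start of the line also skips [INFO] lines, since they begin with "[". A sketch in Python, whose (?P<...>) group syntax stands in for Splunk's (?<...>):

```python
import re

# Add "-" to the character class so hyphenated event names match in full,
# and anchor at line start so "[INFO] ..." lines fail to match entirely.
PATTERN = re.compile(r"^(?P<event_name>[\w-]+)\|(?P<task_id>\d+) (?P<event_id>\d+)")

samples = [
    "abc-pendingcardtransfer-networki|30 77784791 1547",
    "[INFO] 2019-09-01 13:52:38.22 [main] Apache - Number of netwrok events is 25",
    "sfgd_SGDJELE|2 0 0",
]

for line in samples:
    m = PATTERN.match(line)
    if m:  # the [INFO] line never matches because of the ^ anchor
        print(m.group("event_name"), m.group("task_id"), m.group("event_id"))
# abc-pendingcardtransfer-networki 30 77784791
# sfgd_SGDJELE 2 0
```

In Splunk the same pattern should work in rex, which accepts PCRE-style named groups: | rex "^(?<event_name>[\w-]+)\|(?<task_id>\d+) (?<event_id>\d+)"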
When a playbook execution is complete, SOAR marks it as completed. What you have to do is call the next playbook by name, using a dynamic variable appended to your playbook name; that way, according to your loop, you will call playbook1, playbook2, playbook3, etc.
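A minimal sketch of the name construction (the playbook names and the "local" repository prefix are hypothetical; in an actual SOAR custom function you would pass the resulting name to the playbook-launch API rather than print it):

```python
# Build "playbook1", "playbook2", ... dynamically from a loop counter, as
# described above; "local/" stands in for your SOAR repository name.
def next_playbook_name(base, index, repo="local"):
    return f"{repo}/{base}{index}"

names = [next_playbook_name("playbook", i) for i in range(1, 4)]
print(names)  # ['local/playbook1', 'local/playbook2', 'local/playbook3']
```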
I opened a ticket with VT. It looks like the current version has a bug that prevents the add-on from saving its configuration properly. A new version (1.6.1) will be released in the next few days.
Good afternoon. I am receiving a number of events in Splunk SOAR from Splunk, and I have a playbook that is executed for each event. I am wondering whether the playbook executes on the events in sequence or simultaneously. When receiving 3 events, I need the playbook to be executed first on 1, then on 2, and finally on 3, but from what I've seen SOAR executes the playbook out of order, for example 3, 1, 2. I would appreciate it if anyone has any information on this.
Interested in getting live help from a Splunk expert? Register here for our upcoming session on Splunk IT Service Intelligence (ITSI) on Wed, September 13, 2023 at 1pm PT / 4pm ET. This is your opportunity to ask questions related to your specific ITSI challenge or use case, including:

- ITSI installation and troubleshooting, including Splunk Content Packs
- Implementing ITSI use cases and procedures
- How to organize and correlate events
- Using machine learning for predictive alerting
- How to maintain accurate & up-to-date service maps
- Creating ITSI Glass Tables, leveraging performance dashboards (e.g., Episode Review), and anything else you’d like to learn!

Check out Community Office Hours for a list of all upcoming sessions. Join the #office-hours user Slack channel to ask questions and join the fun (request access here).
That documentation doesn't apply in this case.  It refers to the join command, which is not being used in the sample query.  The query uses the SQL JOIN operator, which is not the same as the SPL join command. As I mentioned in my earlier response, everything in the dbxquery command is processed by the remote database, not by Splunk.  Splunk only sees the results of the query.
Hello, According to the following documentation, the limit is 50,000 rows. I am wondering if a WHERE clause can fix this problem or if it's the same. Can you take a look and see if it's accurate? Thanks
https://community.splunk.com/t5/Splunk-Search/How-to-join-large-tables-with-more-than-50-000-rows-in-Splunk/m-p/152136
https://docs.splunk.com/Documentation/SCS/current/SearchReference/JoinCommandOverview
Hi @Tanu.Sharma, Thanks for asking your question on the Community. While we wait for Community members to jump in, feel free to explore the existing content on the Community (asked by other members) regarding Splunk.
https://community.appdynamics.com/t5/forums/searchpage/tab/message?filter=location&q=%22Splunk%22&noSynonym=false&inactive=false&location=category:Discussions&sort_by=-topicPostDate&collapse_discussion=true
AFAIK, the dbxquery command doesn't care what the query does.  It's up to the DB itself to interpret the SQL and decide how many rows to return.  Splunk's limit will be the same either way.
Hi @Manuel.Cunquera, Thanks for asking your question on the Community. I found this existing Knowledge Base article you can check out. https://community.appdynamics.com/t5/Knowledge-Base/How-do-I-manage-Accounts-Management-Portal-users-as-an-Admin/ta-p/23286 Additionally, you can review our AppD Docs site for Accounts and User admin info: https://docs.appdynamics.com/accounts/en/global-account-administration
Just like the Splunk protocol is undocumented, so too is the compression method.  It may be a standard compression method or it may be proprietary like the protocol itself. Splunk recommends using the compression available in SSL.
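For reference, a hedged sketch of the two options in outputs.conf (the stanza name and server addresses are placeholders; verify the setting names against the outputs.conf.spec for your Splunk version):

```ini
# Option 1: Splunk's own payload compression; the receiving inputs.conf
# [splunktcp] stanza must also set compressed = true.
[tcpout:my_indexers]
server = idx1.example.com:9997
compressed = true

# Option 2 (what the post recommends): SSL transport, which has its own
# compression setting.
#[tcpout:my_indexers_ssl]
#server = idx1.example.com:9998
#clientCert = $SPLUNK_HOME/etc/auth/client.pem
#useClientSSLCompression = true
```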