All Posts


My architecture: F5 devices send logs to our syslog server, and we have a UF installed on the syslog server to forward the data to our Splunk environment. But the client wants to install an LTM on our syslog server because sometimes logs are not coming through properly... We use UDP as of now, but TCP is recommended for them. I am not aware of syslog configuration at all.
Your questions are very vague, and it's very hard to tell what you have at this moment and what you're trying to achieve. Be a bit more descriptive about what your current architecture is and what your goal is. We can help with specific technical questions, or we can explain something that you don't understand from the docs, but community volunteers are not a substitute for proper support or professional services.
Hi @PickleRick, can you tell me more about LTM and how to configure it with syslog? We are receiving data from F5 devices only. And please help me with syslog configuration for Splunk, with a link to the latest docs.
Ha! So it's a modular input. With modular inputs, time processing works a bit differently. See https://dev.splunk.com/enterprise/docs/developapps/manageknowledge/custominputs/modinputsscript You need to configure your database input properly (https://docs.splunk.com/Documentation/DBX/3.18.1/DeployDBX/Createandmanagedatabaseinputs) or, if you can't find a suitable combination of parameters, you need to use INGEST_EVAL to modify the _time field after the initial parsing stages during ingestion.
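If you do end up on the INGEST_EVAL route, a minimal sketch might look like the following. The sourcetype name, the EVENTTS field name, and the strptime format are assumptions taken from the event shown later in this thread; adjust them to your data, and note that quote escaping inside .conf eval strings can need tweaking:

```ini
# props.conf -- "my_dbx_sourcetype" is a placeholder for your DB Connect sourcetype
[my_dbx_sourcetype]
TRANSFORMS-fix_time = fix_dbx_time

# transforms.conf
[fix_dbx_time]
# Pull the EVENTTS value out of _raw and overwrite _time with it.
# The strptime format must match how the timestamp actually appears in the event.
INGEST_EVAL = _time=strptime(replace(_raw, ".*EVENTTS=\"([^\"]+)\".*", "\1"), "%Y-%m-%d %H:%M:%S.%3N")
```

INGEST_EVAL runs at index time after line breaking, so it only sees _raw and indexed fields, not search-time extractions.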
@PickleRick I am putting the props settings under /app/local. Note: the data is ingested into Splunk through the DB Connect app, so I have applied all the props settings under /db_connect_ap/local
@yuanliu You should normally not need to escape quotes. It's not a rex command in SPL. @uagraw01 How are you ingesting your data, and where do you put those props? (On which server?)
I have this same problem. If a container has multiple artifacts, for example 10, the tagging approach usually limits duplicate actions to 1-3 instead of 10. I haven't been able to find low-level details about how the Python scripts are executed at an interpreter/ingestion level, and I don't think they exist publicly, which is unfortunate because the power of the platform lies in being able to use Python to efficiently process data. The VPE makes this clunky.

I spent 3-4 years on Palo Alto's XSOAR as the primary engineer, and for all its quirks, Palo Alto has produced far better documentation for their SOAR than Splunk has (Palo Alto overhauled their documentation when they acquired Demisto). I'm about a year into using Splunk SOAR, and for all the quirks I had to handle with Palo Alto's XSOAR, I wish I could go back to it. Maybe my opinion/preference will change, but unless Splunk produces better documentation and opens some lower-level documentation up to the public/community, I'm doubtful it will.

Palo Alto's XSOAR has a feature called Pre-Processing rules which lets you filter, dedup, and transform data coming into the SOAR before playbook execution. I wish Splunk SOAR had something similar; that way ingestion/deduplication logic (if you can even call tagging "that") wouldn't be intermingled in the same area as the "OAR" logic of the playbook, and race conditions could hopefully be avoided.

The problem with "Multi-Value" lists is that they break pre-existing logic. Maybe I'm missing something, but that option should be configurable in the Saved Search/Alert + Action in the Splunk App for SOAR Export, so that it could be set on a per-alert basis.

Six years ago I chose Demisto over Phantom while working for a Fortune 300; if I could have my way right now, I'd probably go with my first choice. P.S. To be fair to Splunk SOAR, maybe there's some feature I'm overlooking.
No. CMC is pre-built and, as far as I know, there's no way to edit it at the user level. Also, what would you want to "monitor" when you can't dispatch REST to the indexers? If you just want to dig through the logs, you don't need CMC for that.
Interesting find. It's inconsistent with the docs, so it calls for a support case or at least docs feedback.
Hi @splunklearner, apply the checks that @dural_yyz hinted at. In a few words: look less at the UF configuration and more at the syslog configuration. Ciao. Giuseppe
Hi @esimon, use a regex (rex command) to extract the first part of the token. In other words, if the token is "A-12345" and you want to use index="A-12345" and column="A" for the WHERE condition, you could try:

index="$token$"
| rex field="$token$" "^(?<my_field>[^-]*)"
| where column=my_field
| ...

But the eval approach should also work. Ciao. Giuseppe
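If classic Simple XML is an option (the original question mentions <eval token=...>), the suffix can also be stripped once, at input time, in the dropdown's change handler. A sketch, assuming the dropdown token is named "token", the literal suffix is "-suffix", and the choice values follow the pattern described in the question:

```xml
<input type="dropdown" token="token">
  <label>Index</label>
  <choice value="A-suffix">A</choice>
  <choice value="B-suffix">B</choice>
  <change>
    <!-- $value|s$ is the selected value, quoted for use in an eval expression;
         base_token then holds "A", "B", ... for use as column="$base_token$" -->
    <eval token="base_token">replace($value|s$, "-suffix", "")</eval>
  </change>
</input>
```

This keeps the per-panel searches free of rex logic, at the cost of maintaining the mapping in the input definition. Dashboard Studio exposes JSON rather than this XML, so it applies only if the dashboard can be rebuilt in Simple XML.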
LTM is an F5 product, not a part of Splunk environment. Also load-balancing syslog traffic can be a relatively complicated issue despite its initially perceived simplicity.
Dear Sir/Madam,

We have installed the on-premise version of AppDynamics with various agents in an operational environment. We decided to update the controller (not the agents). During the controller update, we encountered a problem and had to reinstall the controller, so the controller access key changed. It takes a lot of time to coordinate and update the agents in the operational environment, so we have not changed the agents.

Following the 'Change Account Access Key' documentation, we changed the new account access key (for Customer1 in single-tenant mode) back to the old account access key (without changing any configuration on the agent side, including the access keys). Now every agent is OK (e.g., app agents, DB agents, etc.), but the database collectors do not work. Although the database agent is registered, we can't add any database collector. I checked the controller log and found the following exception: "dbmon config ... doesn't exist". It seems that the instructions in the link above are not enough for the database agent and collector; some extra steps are needed.

Thanks for your attention. Best regards.
Hello everyone, I ran into a problem with Splunk UBA that I need help with; thank you for guiding me. I have more than one domain in Splunk UBA, and it mistakenly recognizes some users as the same user due to name similarity. These users are not the same person and merely have similar names in the login IDs field. How can I solve this problem, so that users with the same login IDs don't produce false-positive anomalies? Thank you for your guidance.
Hi, I am new to Splunk administration. We have a syslog server in our environment to collect logs from our network devices. Our clients asked us to install an LTM (Local Traffic Manager) load balancer on the syslog server. I have no idea what a load balancer does, how to install it, or whether it is a component of Splunk (full package or lightweight package). Please suggest how to set up this environment. Also, what is suggested for network logs: UDP or TCP? I want to learn all about the syslog server and its end-to-end configuration with Splunk. Please provide the latest doc link. (I am not asking about an add-on; please note.)
@yuanliu You have not used either of the two attributes below. Can I also skip these two? Not using them will have no impact on the consistency of data parsing, right?

TIME_FORMAT
MAX_TIMESTAMP_LOOKAHEAD

Thanks in advance, and acknowledging your valuable time.
Using your illustrated event as input, this is my test output. The displayed event time is 11/7/24 6:29:43.175 PM, which matches the EVENTTS value of 2024-11-07 18:29:43.175 and differs from the log's timestamp of 2024-11-07 18:45:00.035. This is my sourcetype entry:

[test-eventts]
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
TIME_PREFIX = EVENTTS=\"
category = Custom
description = https://community.splunk.com/t5/Splunk-Search/Timestamp-fixing/m-p/703912#M238560
pulldown_type = 1

The sourcetype is created from defaults except for TIME_PREFIX. (Pro tip: Splunk's default timestamp detection is very versatile and often not worth overriding.)
If this is not available for us "sc_users", can a Splunk engineer create a visualization in the CMC console? Or is there any alternative way of grabbing the health of your application on the search head? Please advise. Thank you.
To fix this issue, we had our client insert the "Connection: Keep-Alive" header into the HTTP POST requests. This instructed the Splunk server to keep the connection alive.
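For anyone hitting the same thing: the header is set on the client side of the HTTP request. A minimal sketch with Python's standard library (the endpoint URL and token are placeholders, not real values):

```python
from urllib.request import Request

# Hypothetical HEC-style endpoint and token -- replace with your own.
url = "https://splunk.example.com:8088/services/collector/event"

req = Request(
    url,
    data=b'{"event": "hello"}',
    headers={
        "Authorization": "Splunk <your-hec-token>",
        "Content-Type": "application/json",
        # Ask the server to keep the TCP connection open between POSTs.
        "Connection": "Keep-Alive",
    },
    method="POST",
)

print(req.get_header("Connection"))  # Keep-Alive
```

Most HTTP client libraries that pool connections (e.g. a requests Session) send an equivalent header or reuse connections automatically; the explicit header matters mainly for hand-rolled clients.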
I have a dashboard in Splunk Cloud which uses a dropdown input to determine the index for all of the searches on the page, with values like "A-suffix", "B-suffix", etc. However, now I want to add another search which uses a different index but has `WHERE "column"="A"`, with A being the same value selected in the dropdown, but without the suffix. I tried using eval to replace the suffix with an empty string, and I tried changing the dropdown to remove the suffix and do `index=$token$."-suffix"` in the other queries, but I can't get anything to work. It seems like I might be able to use `<eval token="token">` if I could edit the XML, but I can only find the JSON source in the web editor and don't know how to edit the XML with Dashboard Studio.