All Posts

Seems like it is a browser issue. The error is from Chrome; I changed to Edge and no errors are seen.
Just as a follow-up: with .csv I definitely get an error: Non-result: ERROR The lookup table 'not_really_my_lookup_name.csv' requires a .csv or KV store lookup definition. Without .csv I get the same error, but *also*: Non-result: ERROR The lookup table 'not_really_my_lookup_name' is invalid.
I basically have a long playbook consisting of sub-playbooks. I have 5 artifacts in a container I am using, where 4 will be dropped via 4 different decision actions and posted to a Confluent topic. The final artifact will make it through to the end of the playbook and also be posted to a Confluent topic. When I run each artifact individually, they work perfectly. However, when I try to run "all artifacts (5 in the container)" to simulate the artifacts coming in at the same time, they are each posted 5 times to the Confluent topic, totaling 25 instead of just 5.

I have two hunches as to where the problem might be. One is that phantom.decision() is evaluating to True despite only one artifact matching that criterion, and so all 5 artifacts get posted instead of 1. The other is that there is no "end" after my Post actions, so each artifact is posted to Confluent but then also continues on to the next playbook against my intentions.

I have no idea what is causing this and haven't found much documentation for my issue. I just find it annoying that the artifacts work perfectly fine individually but not when run together. This might be how it is designed to work, or I may simply be doing something incorrectly, but any help regarding this would be greatly appreciated!
Are you looking for this one... https://splunkbase.splunk.com/app/3283 Or check the other two apps... https://splunkbase.splunk.com/apps?keyword=HL7
Thanks
Where can I find the HL7 add-on for Splunk? We created a solution around this for the healthcare field. We now have an official go-ahead for a POC with Splunk in Asia. We need the HL7 add-on. Can you please help us? Thanks, Sanjay
As @mmccul_slac says, INDEXED_EXTRACTIONS is what causes this behaviour. When JSON data comes in with INDEXED_EXTRACTIONS=json set, Splunk parses and indexes the JSON fields, and when you search, Splunk also parses the JSON and creates the fields again at search time, hence you get duplicates. See https://community.splunk.com/t5/Getting-Data-In/Why-is-my-sourcetype-configuration-for-JSON-events-with-INDEXED/td-p/188551. It may also depend on where the data is coming from before it reaches HEC and whether it passes through an intermediate Splunk universal forwarder.
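A minimal sketch of the common remedy, assuming you want to keep the index-time extractions and that the sourcetype is called my_json_sourcetype (a placeholder name): leave the parsing config as it is, and disable the search-time JSON extraction for that sourcetype on the search head so each field is only created once.

# props.conf where the data is parsed (HEC/heavy forwarder/indexer), placeholder sourcetype
[my_json_sourcetype]
INDEXED_EXTRACTIONS = json

# props.conf on the search head: stop the same JSON being re-extracted at search time
[my_json_sourcetype]
KV_MODE = none
AUTO_KV_JSON = false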
Have you configured your forwarder, firstly to collect data from the host and secondly where to send it? https://docs.splunk.com/Documentation/Forwarder/9.1.0/Forwarder/Configuretheuniversalforwarder https://docs.splunk.com/Documentation/Forwarder/9.1.0/Forwarder/Configureforwardingwithoutputs.conf Have you created an index that the UF will send its data to?
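For reference, a minimal sketch of that setup from the CLI; the indexer address and the index name win_events are placeholders, not values from your environment.

# On the Linux indexer: open the receiving port and create the target index
splunk enable listen 9997
splunk add index win_events

# On the Windows universal forwarder: tell it where to send data
splunk add forward-server <indexer-ip>:9997

# In the UF's inputs.conf, point the input at that index, e.g.
[WinEventLog://Security]
disabled = 0
index = win_events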
You're most of the way there. In your original search, replace the date you have with [ ] and put your makeresults in it. The items in the brackets form a subsearch, which runs before the remainder of the search.

| ldapsearch search="(&(objectClass=user)(whenChanged>=[| makeresults | eval whenChanged=strftime(relative_time(now(),"-2d@d"),"%Y%m%d%H%M%S.0Z") | return $whenChanged])(!(objectClass=computer)))"
| table cn whenChanged whenCreated
I forgot to tell you what my inputs.conf contains:

[WinEventLog://Application]
disabled = 0

[WinEventLog://Security]
disabled = 0

[WinEventLog://System]
disabled = 0

[WinEventLog://Setup]
disabled = 0

My outputs.conf:

[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = 192.168.1.2:9997

[tcpout-server://192.168.1.2:9997]
Hello, I am trying to learn Splunk, and for that I have set up a demo version in my home lab on a Linux system. I have Splunk running and I added the local files. Then I activated port 9997 and installed a universal forwarder on my Windows 10 PC. I can see on Linux with tcpdump that packets arrive on port 9997, but I can't get the data into Splunk! When I try to add data from a forwarder manually, I see a message that I have no forwarders configured. What am I doing wrong?
Hi Shawno: The Palo Alto App and Add-on are supported by Palo Alto, and they're very happy to work with folks on getting the app to work. I recommend you use your power as a PA customer to reach out to them for more specific help. If you'd like this community to help, you will need to be more specific, like telling us which dashboard and whether there is more info about the error (is there actual context to the error, or does it just say "error"?). See if you can give a bit more info here and then people will be able to help out. Also, you mention an error on a dashboard but are also talking about data no longer ingesting. Are you sure? Is the data there if you just search index=blah sourcetype=blah...? Or is it that the error is stopping data from populating?
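For example, something as simple as this over the last hour would confirm whether events are still arriving; the index and sourcetype here are placeholders for whatever your Palo Alto data actually uses.

index=pan_logs sourcetype=pan* earliest=-1h
| stats count by sourcetype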
I'm experiencing the error "A custom JavaScript error caused an issue loading your dashboard" on both the Palo Alto Networks app and the add-on app; I'm also unsure why reporting is no longer ingesting data. Thanks
I need to run a daily ldapsearch that will grab only the accounts that have changed in the last 2 days. I can hard-code a date into the whenChanged attribute:

| ldapsearch search="(&(objectClass=user)(whenChanged>=20230817202220.0Z)(!(objectClass=computer)))"
| table cn whenChanged whenCreated

I am trying to turn whenChanged into a last-2-days variable that will work with ldapsearch. I can create a whenChanged value using:

| makeresults
| eval whenChanged=strftime(relative_time(now(),"-2d@d"),"%Y%m%d%H%M%S.0Z")
| fields - _time

I could use help getting that dynamic value into the ldapsearch so that I am looking for values >= whenChanged.
You'd need to use btool to check at the OS level for any configs for that source and sourcetype, e.g.:

splunk btool props list RanorexJSon
splunk btool props list source::ElectraExtendedUI

(Make sure to get the sourcetype and source names accurate.) You're looking for parameters about indexed extractions. Since a props stanza can apply to both a sourcetype and a source (as well as a host, but that's less likely), search for both.
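As a quick illustration, adding --debug makes btool print the .conf file each line comes from, which shows exactly where any indexed-extraction setting is defined:

splunk btool props list RanorexJSon --debug
splunk btool props list source::ElectraExtendedUI --debug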
Thank you for your input. I found one workaround; here is the code. Luckily I have a timestamp field in my lookup file, so I am making use of that. If you have any ideas to make it better, please let me know.

| inputlookup lkp_sds_wms_trw_slislo.csv
| eval start_date = strftime(relative_time(now(),"-60d@d"), "%Y-%m-%d")
| eval Endtimestamp = strptime(start_date, "%Y-%m-%d")
| where timestamp > Endtimestamp
| outputlookup lkp_sds_wms_trw_slislo.csv

Thanks again. Regards, Amit
It's EST. Also, that is not the problem; it's the date as well. The document name is 09042023_test.txt and inside it has something like:

ID= 101010
processed_date=09/03/2023

Today's date is 09/05/2023, but when the forwarder forwards the file, it takes the date inside the document, so the search has to go 2 days back to find the data.
Just add any additional character groupings into the allowed character ranges, i.e.

| rex field=Group max_match=0 "'(?<g>[A-Za-z_\.]+)':'"
| rex field=Value max_match=0 "'(?<v>[A-Za-z_\.]+)'"
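For instance, a runnable sketch that also allows digits and hyphens in the captured values; the sample data in the eval is made up purely for illustration.

| makeresults
| eval Group="'alpha_one':'x' 'beta-2.sub':'y'", Value="'gamma-3.val'"
| rex field=Group max_match=0 "'(?<g>[A-Za-z0-9_\.\-]+)':'"
| rex field=Value max_match=0 "'(?<v>[A-Za-z0-9_\.\-]+)'"
| table g v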
TCP has its own compression standards; the same applies here.
Are you monitoring the path where the logs are written on the UF? Can you share your inputs.conf? This will help you check further.
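For comparison, a minimal monitor stanza in the UF's inputs.conf would look something like this; the path, index, and sourcetype below are placeholders, not your actual values.

[monitor:///var/log/myapp/app.log]
disabled = 0
index = my_index
sourcetype = my_app_logs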