All Posts

I am basically faced with this problem:

    | makeresults count=3
    | streamstats count
    | eval a.1 = case(count=1, 1, count=2, null(), count=3, "null")
    | eval a.2 = case(count=1, "null", count=2, 2, count=3, null())
    | eval a3 = case(count=1, null(), count=2, "null", count=3, 3)
    | table a*
    | foreach mode=multifield * [eval <<FIELD>>=if(<<FIELD>>="null",null(),'<<FIELD>>')]

I have fields that contain a `.`, and this breaks the `foreach` command. Is there a way to work around this? I have tried using `"<<FIELD>>"` but to no avail. I think it would work to rename all "bad" names, loop through them and rename them back, but if possible I would like to avoid doing this.
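A minimal sketch of the rename-around workaround mentioned above, assuming the dotted fields share a common prefix such as a. (the prefix and the temporary underscore names are placeholders, and the wildcard rename should be verified against your data):

    | rename a.* AS a_*
    | foreach a_* [eval <<FIELD>> = if('<<FIELD>>' = "null", null(), '<<FIELD>>')]
    | rename a_* AS a.*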
  index=asa host=1.2.3.4 | fillnull value="No active VPN user" src_sg_info | timechart span=10m dc(src_sg_info) by src_sg_info | rename user1 as "David E"
Please, how can I change the service logon account from the default NT Authority account to the Local System account? I am trying to do this with Ansible.
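A hedged sketch of what this could look like with the ansible.windows.win_service module; the service name below is a placeholder, and LocalSystem is the account Windows displays as Local System:

    - name: Set the service to run as Local System (service name is a placeholder)
      ansible.windows.win_service:
        name: SplunkForwarder
        username: LocalSystem   # built-in account, so no password is required
        state: restarted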
Hi, I'm not sure how to fix this; hope someone can give me a hint. The search looks like:

    index=asa host=1.2.3.4 src_sg_info=*
    | timechart span=10m dc(src_sg_info) by src_sg_info
    | rename user1 as "David E"

This search gives a list of the active/logged-on VPN users. So far so good. My question is: how do I include empty src_sg_info in the same timechart and mark it as "No active VPN user"?
I have a simple report that has Hebrew in it. Exporting to CSV works as it should and I see the Hebrew, but exporting to PDF shows nothing.

    | makeresults | eval title = "לא רואים אותי"

Can someone help? Thanks
Thanks 
Hi, I'm trying Splunk SOAR Community Edition, and I'm having an issue with the Elasticsearch app. I'm attempting to configure the asset with my Elasticsearch instance. The test connectivity is good, but I can't poll incidents with "poll now". I encounter this type of error:

    Starting ingestion... If an ingestion is already in progress, this request will be queued and completed after that request completes.
    App 'Elasticsearch' started successfully (id: 1699519715123) on asset: 'elastic'(id: 4)
    Loaded action execution configuration
    Quering data for soar index
    Successfully added containers: 0, Successfully added artifacts: 0
    1 action failed
    Unable to load query json. Error: Error Message: Expecting value: line 1 column 1 (char 0)

However, when I use an action in a playbook with the command "run query", I can see data. Has anyone ever encountered this error?
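For what it's worth, "Expecting value: line 1 column 1 (char 0)" is what Python's JSON parser raises on an empty string, so the query used for ingestion in the asset configuration (the exact field name depends on the app version) is probably empty or not valid JSON. A minimal, hedged example of a valid Elasticsearch query body for that field:

    { "query": { "match_all": {} } }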
I have a distributed Splunk environment and my dashboard is linked to Services. I made a few changes to a KPI base search and KPI title, but they are not reflected in the dashboard. Please suggest what needs to be done.
I noticed that dashboards in Splunk 9.1.0 open in a new tab instead of the same tab. This wasn't the case in previous versions of Splunk. Does anyone know why this change was made and how to make dashboards open in the same tab using conf file changes? Any help is much appreciated. Thanks
| eval time_period= "01-Nov-23"
| eval time_period_epoc=strptime(time_period,"%d-%b-%y")
| where epoc_time_submitted <= time_period_epoc
| join max=0 type=left current_ticket_state
    [| inputlookup monthly_status_state_mapping.csv
     | rename Status as current_ticket_state "Ageing Lookup" as state
     | table current_ticket_state state]
| eval age= Final_TAT_days
| eval total_age=round(age,2)
| rangemap field=total_age "0-10days"=0-11 "11-20 Days"=11.01-20.00 "21-30 Days"=20.01-30 "31-40 Days"=30.01-40 "41-50 Days"=40.01-50 "51-60 Days"=50.01-60 "61-70 Days"=60.01-70 "71-80 Days"=70.01-80 "81-90 Days"=80.01-90 "91-100 Days"=90.01-100 ">100 Days"=100.01-1000
| stats count by work_queue state range
| eval combined=work_queue."|".state
| chart max(count) by combined range
| eval work_queue=mvindex(split(combined,"|"),0)
| eval state=mvindex(split(combined,"|"),1)
| fields - combined
| table work_queue state "11-20 Days" "21-30 Days" "31-40 Days" "41-50 Days" "51-60 Days" "61-70 Days" "71-80 Days" "81-90 Days" "91-100 Days" ">100 Days"
| rename work_queue as "Owner Group"
| fillnull value=0
| addtotals
Is there any way to disable the Dashboard Studio and classic dashboard help cards under the Dashboards tab through conf file changes?
Please help comment on the issue below.

Bug description: the limit option is not processed correctly for phantom.collect2 in Phantom version 6.1.0.

Reproduced in the lab:

    testb = phantom.collect2(container=container,tags=["test"], datapath=['artifact:*.name'],limit=0)
    phantom.debug(len(testb))

There are more than 6000 artifacts in the test container. However, phantom.collect2 only returns 1999 results, even though we set limit=0, which means no limit:

    Nov 09, 11:19:01 : phantom.collect2(): called for datapath['artifact:*.name'], scope: None and filter_artifacts: None
    Nov 09, 11:19:01 : phantom.get_artifacts() called for label: *
    Nov 09, 11:19:01 : phantom.collect(): called with datapath: artifact:* / <class 'str'>, limit = 2000, scope=all, filter_artifact_ids=[] and none_if_first=False with trace:False
    Nov 09, 11:19:01 : phantom.collect(): calling out to collect_from_container
    Nov 09, 11:19:01 : phantom.collect(): called with datapath 'artifact:*', scope='all' and limit=2000. Found 2000 TOTAL artifacts
    Nov 09, 11:19:01 : phantom.collect2(): Classified datapaths as [<DatapathClassification.ARTIFACT: 1>]
    Nov 09, 11:19:01 : phantom.collect(): called with datapath as LIST of paths, scope='all' and limit=0. Found 1999 TOTAL artifacts
    Nov 09, 11:19:01 : 1999
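A possible workaround sketch until the limit handling is fixed, assuming collect2 honours an explicit large limit (the 10000 below is a placeholder; set it above your artifact count):

    # pass an explicit limit instead of 0, since limit=0 appears to fall back to the default cap of 2000
    testb = phantom.collect2(container=container,
                             tags=["test"],
                             datapath=["artifact:*.name"],
                             limit=10000)
    phantom.debug(len(testb))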
@bowesmana am I misunderstanding what @Anud wants to achieve? To me it sounded like a simple lookup combined with an inputlookup.
I tested a fresh install of 9.1.1 using Splunk's documented installation procedure. It gave me 1400+ warnings with:

    warning: user splunk does not exist - using root
    warning: group splunk does not exist - using root

If I created the splunk user and group first, there was no warning. But now I have both a splunk and a splunkfwd user! They should never have changed that. It feels like there is no testing of the RPM before shipping it. The installation might be working, but how can this be the quality of the RPM? I logged another support case for this. They have confirmed that this is an unresolved issue.
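Based on the workaround described above, a hedged sketch of creating the user and group before installing the RPM (home directory, shell and package file name are assumptions):

    # create the splunk user/group first so the RPM does not fall back to root
    groupadd splunk
    useradd -g splunk -d /opt/splunk -s /bin/bash splunk
    rpm -ivh splunk-9.1.1-*.x86_64.rpm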
After upgrading a distributed Splunk Enterprise environment from 9.0.5 to 9.1.1, a lot of issues were observed. The most pressing one was the unexpected wiping of all inputs.conf and outputs.conf files from heavy forwarders. All configuration files are still present and intact on the deployment server, but after unpacking the updated version and bringing Splunk back up on the heavy forwarders, all input/output files were wiped from all apps and are not being fetched from the deployment server. So none of them were listening for incoming traffic or forwarding to indexers.

Based on previous experience, there is no way to "force push" configuration from the deployment server when all instances are "happy", which means manual inspection and repair of all affected apps. So now I am curious as to why this happened. If there was something wrong with the configuration, I'd expect errors to be thrown rather than the entire files being deleted. Any input regarding why this happened, and how to find out, would be appreciated.

UPDATE: By now it is very clear what happened: a bunch of default folders were simply deleted during the update. There are a few indications of this in different log files:

    11-08-2023 12:21:19.816 +0100 INFO AuditLogger - Audit:[timestamp=11-08-2023 12:21:19.816, user=n/a, action=delete-parent,path="/opt/splunk/etc/apps/<appname>/default/inputs.conf"

This was unfortunate, as the deploymentclient.conf file was stored in <appname>/default and got erased together with almost all inputs/outputs.conf files and a bunch of other things stored in the default folder. I don't get the impression that this is expected behaviour, so now I am curious about the cause of this highly strange outcome.
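To see what else was removed during the upgrade, a hedged search sketch against the internal logs quoted above (it assumes the action and path key=value pairs in that AuditLogger line are extracted automatically at search time):

    index=_internal sourcetype=splunkd component=AuditLogger action=delete*
    | table _time host action path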
Hi, I had the same issue. Also check busyKeepAliveIdleTimeout in server.conf; we had a value of 12, which did not match the idle timeout of 60 on the AWS ALB. Setting the value to 65 in server.conf solved all HTTP 502s. Regards, ivo
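For reference, a sketch of the setting described above, using the value from this post (the stanza name should be verified against server.conf.spec for your version):

    # server.conf on the instance behind the AWS ALB
    [httpServer]
    busyKeepAliveIdleTimeout = 65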
Hi @krishna1, what's your search (please as text, not a screenshot)? And what would you like to have as a result? Ciao. Giuseppe
Hi @vijreddy30, what do you mean by encrypting source data?

Are you speaking of encrypting the original files? That isn't a Splunk matter.

Are you speaking of data transmission? Which kind of ingestion are you speaking about: forwarders, syslog, HEC?

If forwarders: you can encrypt data between forwarders and indexers, and there are integrity-checking features inside Splunk.

If you're speaking of syslog: I suggest using an rsyslog server and reading the files with a Universal Forwarder; I'm not sure it's possible to encrypt syslog. In addition, you could use two UFs and a load balancer to avoid single points of failure.

If you're speaking of HEC: you can use HTTPS, and the token secures your ingestion; as with syslog, you should use two forwarders and a load balancer.

If you're speaking of encryption on Splunk, see https://www.splunk.com/en_us/blog/learn/end-to-end-encryption.html?locale=en_us

Ciao. Giuseppe
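As a rough illustration of the forwarder-to-indexer encryption mentioned above, a minimal sketch; the certificate paths and server name are placeholders, and the attribute names should be checked against the outputs.conf/inputs.conf spec for your Splunk version:

    # outputs.conf on the forwarder
    [tcpout:ssl_indexers]
    server = indexer.example.com:9997
    clientCert = $SPLUNK_HOME/etc/auth/mycerts/client.pem
    sslVerifyServerCert = true

    # inputs.conf on the indexer
    [splunktcp-ssl:9997]
    disabled = 0

    [SSL]
    serverCert = $SPLUNK_HOME/etc/auth/mycerts/server.pem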