All Topics

Hi All, I have data like below:

Drive        Free_Space
C:,D:        500 GB,450 GB
E:,D:        250 GB,150 GB
C:,E:        250 GB,1 TB
S:,D:,C:,G   120 GB,450 GB,250 GB,800 GB

I want the output to list each drive together with its matching free space. Please help me with this, it's urgent. Thank you.
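A possible sketch, assuming the field names are literally Drive and Free_Space and both are comma-delimited strings: split each into a multivalue field, zip them together, and expand one pair per row.

```
... | makemv delim="," Drive
    | makemv delim="," Free_Space
    | eval pair=mvzip(Drive, Free_Space)
    | mvexpand pair
    | eval Drive=mvindex(split(pair, ","), 0), Free_Space=mvindex(split(pair, ","), 1)
    | table Drive Free_Space
```

mvzip joins the values positionally with a comma by default, so this relies on the drive values themselves containing no commas.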
Hello, this is my search:

index=juniper_vpn ID=AUT24803 (src_user!=ANONYMOUSUSER*)
| eval src_user=upper(src_user)
| eval lmt=min(_time), date=max(_time)
| convert timeformat="%d/%m/%Y %H:%M:%S." ctime(lmt) AS lmt, ctime(date) AS date
| join type=left user [| inputlookup accounts.csv | search domaine="intra" | eval user=matricule]
| join type=left ua [| inputlookup dirigeant.csv | eval ua=UA]
| rename user as Utilisateur, cn as Nom, ua as UA, samaccountname as Dirigeant
| dedup Utilisateur
| rename src_user as Matricule
| stats count(Utilisateur) as "Total", list(Nom) as Nom, list(date) as fin_cnx, list(lmt) as debut_cnx, list(Matricule) as Matricule by UA
| table UA Nom Matricule debut_cnx fin_cnx Total
| sort UA
| addcoltotals labelfield=UA label="nombre total d'utilisateurs" Total

You can see my results in the attached picture. I'd like to have the start date of the connection in the "debut_cnx" column and the end date of the connection in the "fin_cnx" column. Can you help me, please?
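One thing to note: eval min(_time) and max(_time) operate within a single event, so each just returns that event's own _time. A hedged sketch of getting per-user first and last connection times with stats instead (field names taken from the search above):

```
index=juniper_vpn ID=AUT24803 (src_user!=ANONYMOUSUSER*)
| eval src_user=upper(src_user)
| stats min(_time) as debut_cnx, max(_time) as fin_cnx by src_user
| convert timeformat="%d/%m/%Y %H:%M:%S" ctime(debut_cnx) AS debut_cnx, ctime(fin_cnx) AS fin_cnx
```

The lookups and per-UA grouping from the original search could then be layered on after the stats.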
I have installed the Sophos Add-on for Splunk (https://splunkbase.splunk.com/app/4096/) on a HF. I am able to receive the events perfectly, but I get the below error when I configure it to pull alerts:

2020-03-05 11:52:19,263 ERROR pid=176598 tid=MainThread file=base_modinput.py:log_error:307 | {"has_more":false,"next_cursor":"xxxxxxxxLTAzLTA1VDEwOjUyOjE5LjIwM1o=","items":[]}
Hello, I want to install a Splunk forwarder on an EC2 instance to collect AWS metrics, but I have a problem. I configured the AWS_STS_REGIONAL_ENDPOINTS=regional environment variable to use a regional endpoint instead of the generic sts.amazonaws.com. The goal is to call private VPC endpoints to talk to AWS services. But I can't find a proper way to configure which STS region to use. The default region seems to be hard-coded in DEFAULT_REGION (/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/common/credentials.py). For now I use

sed -i 's/us-west-1/eu-west-1/g' /opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/common/credentials.py

to fix my problem, but it is very ugly. Do you know a proper way to do that? Maybe a feature request? Thanks
Hello everyone, I have the challenge of comparing two date fields: one comes from a search and the other is reported in a lookup table. Of these two dates I would like to get the most recent one. Let's say the search gives me the "last time a specific user came into our shop", and in the lookup table we report the "last time the user bought something in our shop". Assuming the two dates always exist, I would like to get the more recent of the two. Your help is appreciated.
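A minimal sketch of one approach, with hypothetical index, lookup, and field names: convert both dates to epoch time, then use eval max(), which accepts multiple arguments.

```
index=shop_visits user=*
| stats max(_time) as last_visit by user
| lookup purchases.csv user OUTPUT last_purchase
| eval last_purchase_epoch=strptime(last_purchase, "%Y-%m-%d %H:%M:%S")
| eval most_recent=max(last_visit, last_purchase_epoch)
| eval most_recent=strftime(most_recent, "%Y-%m-%d %H:%M:%S")
```

The strptime format string has to match however the date is actually written in your lookup.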
What's the difference between authentication using ProxySSO and authentication using SSO with reverse proxy? They sound similar and even the configurations look similar. Authentication using Proxy SSO: https://docs.splunk.com/Documentation/Splunk/latest/Security/AboutProxySSO Authentication using single sign-on with reverse proxy: https://docs.splunk.com/Documentation/Splunk/latest/Security/HowSplunkSSOworks
Hello, I'm trying to force an app to use Python 2.7 on Splunk 8 with Enterprise Security. The config in server.conf is set to:

python.version = python3

With this setting my app doesn't work anymore; if I change server.conf to python.version = python2, it works. But I would like to keep python3 in server.conf and force only this app to use python2. I tried adding the following to the app.conf, but it doesn't work:

[install]
python.version = python2

Does anyone know how to force the app to use Python 2? Thank you!
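For what it's worth, in Splunk 8 the python.version setting is generally honored per script stanza rather than app-wide in app.conf. A sketch for a custom search command, with a hypothetical stanza and script name:

```ini
# $SPLUNK_HOME/etc/apps/<your_app>/local/commands.conf
[mycommand]
filename = mycommand.py
python.version = python2
```

Similar per-stanza python.version settings exist for modular inputs (inputs.conf) and REST handlers (restmap.conf); check the .conf spec files for your exact version.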
Hi Ninjas! Do you know how to change the text size in Export > PDF? I have a dashboard with 4 panels; 2 of them are choropleth maps with legends (regions on the map). When I export the dashboard, the text size in those panels is huge and covers the whole map. So I'd like to know whether it's possible to change the size of the text.
I have this add-on installed on a Heavy Forwarder and I seem to be getting stuck with the proxy configuration for the app. On my heavy forwarder I have a proxy configuration for the whole system (under /etc/profile.d/proxy.sh) for when I need to go out via the proxy. As part of this configuration I have added a "no_proxy" entry for internal traffic. I attempted to add an exclusion for ".telemetry.microsoft.com" so that the app would force the traffic through the firewall to allow the smq.telemetry.microsoft.com connection that it makes via TCP. However, it appears the Python script assumes "if http_proxy exists, use it" and consequently ignores the fact that the URL has been excluded. I also do not have the proxy configuration enabled (or configured) in the TA. Is there a way to force this app to honor the proxy configuration on the system, so that it's not assuming a proxy and forcing it? When it goes through the proxy it then forces a redirect to login.microsoftonline.com with logins etc. The API, tenant ID, and secret key work, as I have tested this on a less restricted server.
First, a log sample:

{"offset":44469279,"messages":"<190>Mar 5 2020 06:40:55 WH-USG-MAIN %%01POLICY/6/POLICYPERMIT(l):vsys=public, protocol=6, source-ip=172.16.174.2, source-port=9054, destination-ip=10.251.30.14, destination-port=443, time=2020/3/5 14:40:55, source-zone=dmz, destination-zone=trust, rule-name=GRE.\u0000","fields":{"service":"network-log"},"client_ip":"10.251.0.254","time":"2020-03-05 14:41:20","prospector":{"type":"log"},"source":"/data/network/logs/network/buffer.b5a015d0cd6da0203206d47dc21494bdb.log","@timestamp":"2020-03-05T06:41:20.000Z","beat":{"version":"6.2.4","hostname":"network-log-input","name":"network-log-input"}}

I want to extract the "time":"2020-03-05 14:41:20" part for my indexed _time field. You can see my sourcetype config below, but I can't get this time; it still uses the server's local time for the _time field.
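A sketch of a props.conf timestamp extraction that anchors on the outer "time" JSON key (the sourcetype name is a placeholder; this would go on the indexer or heavy forwarder that first parses the data, and should be tested on a sample first):

```ini
# props.conf, index-time parsing tier
[my_network_json]
TIME_PREFIX = "time":"
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19
```

Anchoring on the quoted "time":" key should skip the unquoted time=2020/3/5 inside the messages field, since TIME_PREFIX is a regex matched before the timestamp.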
Hi, currently I have some JSON files in this structure:

{
  { Meta: .... }
  { Data:
      A,
      B: [ {key: value_b1}, {key: value_b2} ... ]
  }
}

In order to show the nested data properly, the JSON is transformed so that each nested data item is given its own Meta tags:

{
  { Meta: .... }
  { Data: B: {key: value_b1} }
}
{
  { Meta: .... }
  { Data: B: {key: value_b2} }
}

But this resulted in around 200,000 events per JSON file, which slows down the dashboard searches. May I know what would be a good way to keep it to 1 event per JSON file while retaining the nested data structures? Thank you
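One option, sketched with assumed index, sourcetype, and path names based on the structure above: keep one event per file and expand the B array at search time instead of at ingest, so the fan-out only happens for the searches that need it.

```
index=my_json_index sourcetype=my_json
| spath output=B path=Data.B{}
| mvexpand B
| spath input=B
```

mvexpand can be memory-hungry on very large arrays, so it's worth testing against a realistic file before committing to this layout.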
Hi, is there any way to pull a report on the volume of data logged while DEBUG-level logging is enabled, based on the index and sourcetype? I am using the below query to get the data:

index=myapp sourcetype=ejb_app_logs DEBUG
| eval GB=b/1024/1024/1024
| timechart span=1d sum(GB) by s useother=false
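If the events don't carry a byte-count field like b, one hedged alternative is to size each event from its raw text (this assumes the literal string DEBUG appears in the events you want to measure):

```
index=myapp sourcetype=ejb_app_logs "DEBUG"
| eval GB=len(_raw)/1024/1024/1024
| timechart span=1d sum(GB) as daily_GB
```

len(_raw) counts characters of the raw event, which is a reasonable approximation of indexed-volume impact for single-byte encodings.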
Hi all, I have been trying to use 2 tokens, which are calculated from 2 different files, in another query, but it is not working. Can I use 2 tokens from different files in one query, or is there an alternative way to achieve this?
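Within one Simple XML dashboard, tokens set by two independent searches can both be referenced in a third search. A minimal sketch with hypothetical searches and token names, using the done handler to set each token:

```xml
<search>
  <query>index=a | stats max(x) as x1</query>
  <done><set token="tok1">$result.x1$</set></done>
</search>
<search>
  <query>index=b | stats max(y) as y1</query>
  <done><set token="tok2">$result.y1$</set></done>
</search>
<search>
  <query>index=c value&gt;$tok1$ value&lt;$tok2$ | stats count</query>
</search>
```

The consuming search only runs once both tokens are set; if the tokens come from two separate dashboards or files, they won't be visible to each other, and a lookup or saved search is usually the alternative.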
Which port will be used to integrate Tanium and Splunk using the "Tanium Splunk Application" add-on, in order to get data from Tanium into Splunk Enterprise?
I would like one user to be able to edit the XML code or change the queries on the panels/dashboards of his app alone. Which capabilities should I choose to enable this for the user, and how do I apply this only to a particular app?
Hi, I have set up 4 forwarders to communicate with my indexer. I already executed add forward-server, set deploy-poll, and created the outputs.conf from deployment-apps. In forwarder management I can only see 3 of the 4 forwarders. I checked the _internal logs and got errors related to ShutdownHandler, TcpOutputProc, and TcpOutputFd. Below are sample logs (all with host = xxx, source = /opt/splunkforwarder/var/log/splunk/splunkd.log, sourcetype = splunkd):

03-03-2020 13:19:44.312 -0500 INFO ShutdownHandler - shutting down level "ShutdownLevel_ArchiveAndOneshot"
03-03-2020 13:19:44.312 -0500 INFO ShutdownHandler - shutting down level "ShutdownLevel_SyslogOutput"
03-03-2020 13:19:44.312 -0500 INFO TcpInputProc - TCP connection cleanup complete
03-03-2020 13:19:44.312 -0500 INFO ShutdownHandler - shutting down level "ShutdownLevel_Scheduler"
03-02-2020 09:16:43.406 -0500 WARN TcpOutputFd - Connect to xxx:9997 failed. Connection refused
03-02-2020 09:16:43.042 -0500 INFO TcpOutputProc - Connection to xxx:9997 closed. Connection closed by server.

My guess is it is being forced to shut down by the server or an admin, though I am not sure. Has anyone experienced this? Any help is appreciated.
Hi, is it possible to set your coldToFrozenDir path to a UNC path, e.g. \\server\share\ ? I can't find any info on it except one answer where the admin said it didn't work. I could map a network share as a drive and give it a drive letter, e.g. S:, but I'm not convinced that will work once I log off. How does everyone generally configure their coldToFrozenDir? Thanks
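For reference, a sketch of what a UNC-path setting might look like in indexes.conf (index name and share path are hypothetical; whether this works depends on the account the splunkd service runs as having rights to the share, which is also why mapped drive letters are unreliable, since they belong to an interactive session and don't survive logoff):

```ini
# indexes.conf on the indexer
[my_index]
coldToFrozenDir = \\server\share\frozen\my_index
```

Running splunkd as a domain service account with explicit permissions on the share is the usual prerequisite to test here.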
I'm getting logs from my Dell switches, but I can't find a suitable add-on. Is there a current Dell or other add-on that works for Dell Networking N2000/N4000 gear?
From here I know that there are two ways to update the MMDB, which is used by iplocation: https://www.splunk.com/en_us/blog/tips-and-tricks/updating-the-iplocation-db.html My MMDB file is updated monthly. My question is: do I need to restart my indexer after the file upgrade? Or is there a better way?
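One option to look into, sketched here with a hypothetical path: newer Splunk versions expose a db_path setting for iplocation in limits.conf, which lets you point at a custom MMDB file outside the bundled location instead of overwriting it in place. Check the limits.conf spec for your version before relying on it.

```ini
# $SPLUNK_HOME/etc/system/local/limits.conf
[iplocation]
db_path = /opt/splunk/share/GeoLite2-City-latest.mmdb
```

A common pattern is to have the monthly update job write to that fixed path, so the filename Splunk reads never changes.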
Hi, I have an event with white spaces in between that I want to trim. The data comes from DB Connect and I don't have access to the source, so I need to do it on the search head. Can someone help me fix this?
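A search-time sketch, assuming the field in question is called my_field: trim() removes leading and trailing spaces, and a sed-mode rex collapses inner runs of whitespace to a single space.

```
... | eval my_field=trim(my_field)
    | rex field=my_field mode=sed "s/\s+/ /g"
```

If the whole raw event needs cleaning rather than one field, the same sed expression can be applied with field=_raw.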