All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Current specs of the server. I have enabled the 4 inputs (mgmt, audit_general, share_point, audit_exchange) for Office 365 management logs and set the threads to 16 for each input. My question is: could it be that the current resources (threads) are not enough to pull the logs? I have an alert telling me that the latency of indexed logs is up to ~40 minutes. Please correct me if I'm wrong, but the current hardware only provides 4 cores x 1 thread = 4 threads to do the work.
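For reference, a hypothetical inputs.conf sketch of one such input. The stanza and parameter names below are assumptions based on the Splunk Add-on for Office 365 and should be verified against the installed add-on version; with only 4 hardware threads available, lowering the per-input thread count may reduce contention rather than hurt throughput:

```ini
# Hypothetical inputs.conf fragment -- stanza and parameter names
# must be checked against your add-on's README/inputs.conf.spec.
[splunk_ta_o365_management_activity://mgmt]
content_type = audit.general
# 4 inputs x 16 threads oversubscribes a 4-thread host; try a smaller value
number_of_threads = 4
```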
For monitoring the JVM heap space we have health rules for the overall used % of memory. This system works well, but we have a few applications in which, when routine jobs are being executed, the old gen stays almost full for days with little space reclaimed after each major GC, while the new gen keeps being completely used and freed up between frequent GC cycles. The heap space of such nodes gets used up to 97-98% at times before it is freed, and this creates a lot of unnecessary events in AppDynamics. How do we configure a health rule for the JVM heap space of such nodes so that false alerts are minimized and OOM errors are still prevented?
Hi, this is a blank install of Splunk 8.0.2, and the web frontend is asking me every few seconds for my proxy username and password. How can I prevent the web frontend from connecting to the internet? I already tried adding [launcher] remote_tab = false to Splunk\etc\apps\launcher\default, but it didn't help. Thanks, Alex
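A commonly suggested way to stop Splunk Web from reaching out to splunk.com is disabling the update checker in web.conf; this is a sketch, and the setting should be verified against the web.conf spec for 8.0.2 (note that overrides also belong in a local directory, not default):

```ini
# $SPLUNK_HOME/etc/system/local/web.conf
[settings]
# Setting the update-checker base URL to 0 disables the outbound check
updateCheckerBaseURL = 0
```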
Hi team, we have an application on IBM WebSphere, but the user runs the service from the command line and starts/stops the application from the command line too, so I think I can't put the argument in the IBM WebSphere console. Have any of you ever installed an agent on IBM WebSphere without using the WebSphere console? Where can I put the AppDynamics argument? Thanks, shandi aji p
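In case it helps, Java agents are normally attached by adding the -javaagent flag to the JVM arguments in whatever script launches the server, so the console is not strictly required. A sketch, where the paths and the application/tier/node values are placeholders (the -D property names follow AppDynamics' documented Java agent settings, but verify them against your agent version):

```
# Appended to the JVM options in the WebSphere start script (example paths)
-javaagent:/opt/appdynamics/javaagent/javaagent.jar
-Dappdynamics.agent.applicationName=MyApp
-Dappdynamics.agent.tierName=MyTier
-Dappdynamics.agent.nodeName=node01
```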
A user complained that the following query is not displaying any events:

index=main sourcetype=wms_oracle_sessions
| bucket span=5m _time
| stats count AS sessions by _time,warehouse,machine,program
| search warehouse=wk
| stats sum(sessions) AS psessions by _time,program
| timechart avg(psessions) by program

What could be the problem with the above query?
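One thing to check is whether the warehouse field is actually extracted and really contains the value wk; stats drops every field not in its by-clause, so a filter after stats silently returns nothing if the value never matches. A sketch of a simpler equivalent that filters before aggregating (assuming warehouse is a search-time field):

```spl
index=main sourcetype=wms_oracle_sessions warehouse=wk
| bucket span=5m _time
| stats count AS sessions by _time, program
| timechart avg(sessions) by program
```

Running just the base search (index=main sourcetype=wms_oracle_sessions warehouse=wk) on its own is a quick way to confirm whether the filter, rather than the aggregation, is what empties the results.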
In the rt index we have duplicates of the events, but we are not finding the same duplicates in the summary indexes. Can someone tell me what might be the cause?
Hi, I use the search below, which matches the data present in 2 indexes by host. In LastLogonBoot the field "host" is indeed called "host", but in wire the field "host" is actually called "USERNAME". So I need to rename USERNAME to host in order to match the 2 indexes, but it doesn't work. I have tried:

| rename USERNAME as host
| eval host=if(index= wire , USERNAME,host)

What is the problem, please?

[| inputlookup host.csv | table host ] (`LastLogonBoot`) OR (`wire`) earliest=-24h latest=now
| fields host SystemTime EventCode USERNAME NAME
| lookup tutu.csv NAME as AP_NAME OUTPUT Building
| eval SystemTime=strptime(SystemTime, "'%Y-%m-%dT%H:%M:%S.%9Q%Z'")
| stats latest(SystemTime) as SystemTime by host EventCode
| xyseries host EventCode SystemTime
| rename "6005" as LastLogon "6006" as LastReboot
| eval NbDaysReboot=round((now() - LastReboot )/(3600*24), 0)
| eval LastReboot=strftime(LastReboot, "%y-%m-%d %H:%M")
| lookup toto.csv HOSTNAME as host output SITE
| stats last(LastReboot) as "Last reboot date", last(NbDaysReboot) as "Days without reboot", last(AP_NAME) as AP, last(SITE) as Site by host
| sort -"Days without reboot"
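Two likely culprits: the string comparison in eval needs quotes (index="wire", not index= wire), and rename would clobber any existing host value on LastLogonBoot events. A common pattern (a sketch of just the start of the search) is coalesce, which keeps host where it already exists and falls back to USERNAME otherwise:

```spl
(`LastLogonBoot`) OR (`wire`) earliest=-24h latest=now
| eval host=coalesce(host, USERNAME)
| fields host SystemTime EventCode NAME
```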
Hello everyone, I'm facing an issue with the Time Range Picker. When I enter "sourcetype=sudo" in the search bar and press enter, while leaving the Time Range Picker at its default (past 24 hours), no data is returned. However, it works if I run this: sourcetype=sudo earliest=-24h. Is there anything I'm missing? I'm setting up a test environment with a trial version of Splunk 8.0, with 2 search heads, 2 peer nodes and 1 UF. One of the peer nodes performs the role of heavy forwarder.
Hi all, is there any way to remove the line from a line chart and keep only the data labels, without using CSS? I just want to highlight the points in my chart. Using a scatter chart I was not able to achieve this result. Does anyone have an idea?
Hi all, I am trying to monitor the Azure activity logs, diagnostic logs and metrics logs. When I googled, I came across the Splunk Azure Monitor add-on and found documents covering the add-on and the Azure configuration. Based on the steps in the documents, I configured the prerequisites on the Azure side. On the Splunk side, when I tried to install the add-on, Splunk threw this error:

Unable to initialize modular input "azure_monitor_metrics" defined in the app "AzureMonitorAddonForSplunk-1.3.2": Introspecting scheme=azure_monitor_metrics: script running failed (exited with code 1).

To fix this, I followed the steps in the link below, but when I tried to fix the Python dependency the system threw the following error. Could you please guide me on fixing this issue?

root@splunk-dev:/# pip install Markdown -q -t /opt/splunk/etc/apps/AzureMonitorAddonForSplunk-1.3.2/bin
DEPRECATION: Python 2.7 reached the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 is no longer maintained. A future version of pip will drop support for Python 2.7. More details about Python 2 support in pip can be found at https://pip.pypa.io/en/latest/development/release-process/#python-2-support
WARNING: Target directory /opt/splunk/etc/apps/AzureMonitorAddonForSplunk-1.3.2/bin/markdown already exists. Specify --upgrade to force replacement.

Software details:
Splunk version: 7.3.0
Python version: 2.7.12
AzureMonitorAddonForSplunk-1.3.2

Links referred:
https://www.splunk.com/en_us/blog/cloud/splunking-microsoft-azure-monitor-data-part-1-azure-setup.html
https://www.splunk.com/en_us/blog/cloud/splunking-microsoft-azure-monitor-data-part-2-splunk-setup.html
Dependency fix: https://github.com/microsoft/AzureMonitorAddonForSplunk/wiki/Installation-on-Linux
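The pip output itself suggests the fix: the target directory already contains a markdown package, and pip refuses to overwrite it without --upgrade. A sketch of the adjusted command (same path as in the original output):

```
pip install Markdown -q -t /opt/splunk/etc/apps/AzureMonitorAddonForSplunk-1.3.2/bin --upgrade
```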
Hello all, I am ingesting compressed (.gz) log files into Splunk by putting them in the $SPLUNK_HOME/var/spool/splunk folder (i.e., when I put a file in this location, Splunk's default batch input automatically ingests it). When I put a file in this location, Splunk calculates and maintains its CRC value to identify the same file in the future. But when I put a file with the same name and newer content appended at the end, splunkd.log prints:

03-10-2020 21:10:03.588 +0530 INFO WatchedFile - **Will begin reading at offset=63969** for file='/opt/splunk8/splunk/var/spool/splunk/transaction-events-bfe8ae9a4041c5eaeea1663c583cbd54-72000-79200_0.gz'.
03-10-2020 21:10:13.589 +0530 INFO TailReader - Archive file='/opt/splunk8/splunk/var/spool/splunk/transaction-events-bfe8ae9a4041c5eaeea1663c583cbd54-72000-79200_0.gz' has stopped changing, will read it now.
03-10-2020 21:10:13.589 +0530 INFO ArchiveProcessor - Handling file=/opt/splunk8/splunk/var/spool/splunk/transaction-events-bfe8ae9a4041c5eaeea1663c583cbd54-72000-79200_0.gz
03-10-2020 21:10:13.590 +0530 INFO ArchiveProcessor - reading path=/opt/splunk8/splunk/var/spool/splunk/transaction-events-bfe8ae9a4041c5eaeea1663c583cbd54-72000-79200_0.gz (seek=63969 len=77924)

So, according to the logs, Splunk should ingest only the newer content of that file. But when I search in Splunk, it ingests the whole file again instead of only the newer content. Does anyone have any idea about this?
I have categories.csv, which contains the list of sub-categories in each category:

Category,Sub_category
Biology,Botany
Biology,Zoology
Physical_Science,Physics
Physical_Science,Chemistry

In another file I have the results for all sub-categories:

Subject,Result
Botany,Pass
Zoology,Fail
Physics,Being_revaluted
Chemistry,Pass

I need to compute the overall result per category, like:

Biology,Physical_Science
Fail,Being_revaluted

Please help me achieve the above objective. I am able to extract the list of sub-categories in any particular category, say Biology, using the code below:

source="/tmp/categories.csv" host="pc1" index="my_index" Category="Biology"
| eval Sub_cat=Sub_category
| search Category=Biology
| table Category Sub_cat

The above code gives:

Category,Sub_cat
Biology,Botany
Biology,Zoology
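A sketch of one approach, assuming categories.csv is uploaded as a lookup table and the results file is indexed with fields Subject and Result. The severity ranking is an illustrative assumption (Fail outranks Being_revaluted, which outranks Pass), so the worst sub-category result becomes the category's overall result:

```spl
source="/tmp/results.csv" index="my_index"
| lookup categories.csv Sub_category AS Subject OUTPUT Category
| eval rank=case(Result="Fail", 3, Result="Being_revaluted", 2, Result="Pass", 1)
| stats max(rank) AS worst BY Category
| eval Overall=case(worst=3, "Fail", worst=2, "Being_revaluted", worst=1, "Pass")
| table Category Overall
```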
Hi all, I have a lookup like this:

caseid date
a 19-01-01 15:54:43.934000000
b 19-01-01 16:54:43.934000000
c 19-01-01 17:54:43.934000000
d 19-01-01 18:54:43.934000000
e
f
g
...

I ran this command:

| inputlookup test1
| eval date=strptime(date,"%y-%m-%d %H:%M:%S.%9N")
| stats min(date) as starttime max(date) as endtime by caseid
| eval diff=endtime-starttime
| stats avg(diff) as average

My result looks like this (dummy value):

average
999999.9999999

I want to get results like this:

average
3600 (this is 1 hour)

I understand I have to convert, but I don't know how to convert the date differences to seconds. Could you help me?
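Worth noting: strptime already returns epoch seconds, so endtime-starttime is a difference in seconds; no extra conversion step is needed. If the average looks wrong, the usual culprit is the format string not matching the data, which makes strptime return null. A sketch with rounding added (assuming the lookup's dates really use a two-digit year and nine-digit subseconds):

```spl
| inputlookup test1
| eval date=strptime(date, "%y-%m-%d %H:%M:%S.%9N")
| stats min(date) AS starttime max(date) AS endtime BY caseid
| eval diff=endtime-starttime
| stats avg(diff) AS average
| eval average=round(average, 0)
```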
For example:

index=active_directory
| eventstats count by useraccount
| search count=1

The above returns events with a unique value of the useraccount field. What I am looking for is events with a unique user account grouped with several values of another field. I have tried the transaction command to no avail. A pointer in the right direction is greatly appreciated.
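If the goal is user accounts associated with several distinct values of some other field, a distinct count inside stats may be enough. A sketch, where other_field is a placeholder for the real field name:

```spl
index=active_directory
| stats count dc(other_field) AS distinct_values values(other_field) AS value_list BY useraccount
| where distinct_values > 1
```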
Splunk Enterprise has integration documentation for CB Defense, but Splunk Cloud doesn't seem to have the corresponding documentation, which makes this difficult. We would like to know if there is a way to make CB Defense work with Splunk Cloud.
splunkuniversalforwarder:
  image: splunk/universalforwarder
  environment:
    - SPLUNK_START_ARGS=--accept-license
    - SPLUNK_FORWARD_SERVER=ops-splunkhead02.dop.sfdc.net:9997
    - SPLUNK_USER=root
    - SPLUNK_PASSWORD=xxxx
  ports:
    - 9997:9997

I store the log file in /var/logs/serviceLog.log (not in the container but on the local machine). I don't see a parameter to pass the file path. It seems the Splunk forwarder is running in the background, and I just realized I never passed the log source variable to the container! Does anyone have an idea?
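A sketch of one way to wire this up: mount the host's log directory into the container so the forwarder can read it, and register a monitor input at startup. SPLUNK_ADD is the startup hook some splunk/universalforwarder image versions expose for adding inputs; verify it (and the other environment variables) against the documentation for the image tag you run:

```yaml
splunkuniversalforwarder:
  image: splunk/universalforwarder
  environment:
    - SPLUNK_START_ARGS=--accept-license
    - SPLUNK_FORWARD_SERVER=ops-splunkhead02.dop.sfdc.net:9997
    - SPLUNK_PASSWORD=xxxx
    # Register a monitor input for the mounted file at container start
    - SPLUNK_ADD=monitor /var/logs/serviceLog.log
  volumes:
    # Make the host's log path visible inside the container (read-only)
    - /var/logs:/var/logs:ro
  ports:
    - 9997:9997
```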
Hello Splunkers. I downloaded a Splunk trial on my PC for testing. I installed it recently, but it throws the prompt "Your license is expired". How do I solve it? The installed version is 7.3.1.1. Thank you in advance. Happy Splunking!
I have IIS events which look like the ones below. I'm looking to compute the total time taken from the Splunk timestamp, which in this case is 3 seconds (from the :07 second to the :10 second). How can I compute this with eval?

2020-03-11 22:29:10 /Logout Transaction:=InpatUPMC_090_Billing_WorklistLoad
2020-03-11 22:29:07 /Login Transaction:=InpatUPMC_090_Billing_WorklistLoad
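Rather than eval arithmetic on raw timestamps, stats can compute the spread of _time directly; range() returns max minus min in seconds. A sketch, assuming a Transaction field is extracted from the Transaction:= key-value pairs (sourcetype name is a placeholder):

```spl
sourcetype=iis
| stats range(_time) AS duration_seconds BY Transaction
```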
Hello, can I set up Splunk so that it writes to two different S3 locations with SmartStore? For example: write to an on-prem S3 solution and to Amazon S3. Thank you in advance. Regards, Bobby.
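SmartStore supports multiple remote volumes in indexes.conf, but each index writes to exactly one volume via its remotePath, so the split has to be per index rather than per bucket. A sketch, where the bucket names, endpoints, and index names are placeholders:

```ini
[volume:onprem_s3]
storageType = remote
path = s3://onprem-bucket
remote.s3.endpoint = https://s3.mycompany.local

[volume:aws_s3]
storageType = remote
path = s3://aws-bucket
remote.s3.endpoint = https://s3.us-east-1.amazonaws.com

[index_onprem]
remotePath = volume:onprem_s3/$_index_name

[index_aws]
remotePath = volume:aws_s3/$_index_name
```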
Hello, this is my query with "dedup Matricule":

index=juniper_vpn (ID=AUT22673 OR ID=AUT24803) ......67
| eval src_user=upper(src_user)
| join type=left user [| _accounts.csv | search domaine="intra" | eval user=matricule ]
| join type=left ua [| _dirigeant.csv | eval ua=UA ]
| rename user as Matricule, cn as Nom, ua as UA
| dedup Matricule
| stats min(_time) as Firstdate max(_time) as Lastdate list(Nom) as Nom list(UA) as UA list(Matricule) as Matricule by src_user
| convert timeformat="%d/%m/%Y %H:%M:%S" ctime(Firstdate) AS Firstdate, ctime(Lastdate) AS Lastdate
| eval Date=if(Firstdate = Lastdate,"le ".Firstdate,"connecté entre le ".Firstdate." et le ".Lastdate)
| stats count(Matricule) as "Total" list(Nom) as Nom list(Date) as "Date et heure de connexion" list(src_user) as Utilisateur by UA
| table UA Nom Utilisateur "Date et heure de connexion" "Total"
| addcoltotals labelfield=UA label="nombre total d'utilisateurs" "Total"

You can see the results in the screenshot below. The results in the "Date et heure de connexion" column are not correct. But if I delete "dedup Matricule" from my query, the results in that column are correct, but there are many of them, as you can see in the second screenshot. How can I get only one value in the "Date et heure de connexion" column without dedup? Can you please help me?
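One way to avoid dedup entirely: since dedup keeps a single arbitrary event per Matricule, it discards the other connection times before min/max are computed. Including Matricule in the first stats by-clause collapses each Matricule to one row while preserving its true first and last times. A sketch of just the aggregation portion (the earlier joins and renames are assumed unchanged):

```spl
| stats min(_time) AS Firstdate max(_time) AS Lastdate BY src_user Matricule Nom UA
| convert timeformat="%d/%m/%Y %H:%M:%S" ctime(Firstdate) AS Firstdate, ctime(Lastdate) AS Lastdate
| eval Date=if(Firstdate=Lastdate, "le ".Firstdate, "connecté entre le ".Firstdate." et le ".Lastdate)
| stats dc(Matricule) AS "Total" list(Nom) AS Nom list(Date) AS "Date et heure de connexion" list(src_user) AS Utilisateur BY UA
```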