So, at the moment, I want to import log files which were copied from a remote server to my Windows PC. I want to import all of the proxy and AS logs, and they all have the same name, SystemOut.log. What's the proper way to do that? (I could rename each file so the names are unique, but I'm sure there's a better way.) Searching the forum, I see some info about monitoring the live files, but not about a one-time import. Thanks!
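For a one-time import, one option (besides the Add Data → Upload UI, which handles one file at a time) is the splunk CLI's oneshot input, which indexes a file once and lets you set the host per file, so identical filenames don't collide. A sketch, run from the Splunk bin directory; the paths, index, and sourcetype names here are illustrative, not from the post:

```
REM One-time index of each copied SystemOut.log; host distinguishes the servers.
splunk add oneshot "C:\logs\proxy1\SystemOut.log" -index myindex -sourcetype websphere_systemout -host proxy1
splunk add oneshot "C:\logs\as1\SystemOut.log" -index myindex -sourcetype websphere_systemout -host as1
```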
All my Splunk alerts got disabled due to a wrong config. I want to get the list of alerts that were enabled prior to the issue so I can re-enable them manually. Any pointers on how I can get that list? (I am aware of the date the wrong config went into the system.)
Hello fellow Splunkers, I would like to ask you something about the approach most alerts take to find outliers. I tried to find some information on my own, but I never got a good explanation. Let's take this query as an example:

tag=email | search src_user=*@mycompany.com | bucket _time span=1d | stats count by src_user, _time | stats count as num_data_samples max(eval(if(_time >= relative_time(now(), "-1d@d"), 'count', null))) as recent_count avg(eval(if(_time<relative_time(now(),"-1d@d"),'count',null))) as avg stdev(eval(if(_time<relative_time(now(),"-1d@d"),'count',null))) as stdev by src_user | where recent_count > (avg+stdev*2) AND num_data_samples>=7

What I am trying to understand is the purpose of the two consecutive stats commands. I read that the first one obtains a daily count per user and the second one measures how often each user was seen, but honestly I haven't had much success confirming that, so I'm interested in understanding whether this is the basis for building a good baseline. Thanks so much.
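Reading the query, the first stats produces one count per user per day, and the second stats builds a per-user baseline (mean and standard deviation of the earlier days' counts) and flags the most recent day when it exceeds mean + 2·stdev. A minimal Python sketch of that second stage, with made-up daily counts:

```python
import statistics

# Daily event counts for one user; the last entry is the most recent day.
daily_counts = [12, 15, 11, 14, 13, 12, 40]

history, recent = daily_counts[:-1], daily_counts[-1]
avg = statistics.mean(history)       # baseline mean of earlier days
stdev = statistics.stdev(history)    # baseline spread of earlier days

# Flag the user when the recent day exceeds the 2-sigma threshold
# and there are enough samples for a meaningful baseline (num_data_samples>=7).
is_outlier = recent > (avg + 2 * stdev) and len(daily_counts) >= 7
print(is_outlier)
```

With these numbers the baseline is about 12.8 ± 1.5, so a recent count of 40 is well past the 2-sigma threshold and the user is flagged.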
We would like to remove EBS volumes which were used for the cold store and DM summaries. The docs are not overly clear on the recommended approach: https://docs.splunk.com/Documentation/Splunk/7.3.4/Indexer/MigratetoSmartStore
I have a search running fine by itself:

index=indexA user=ABC123 | where isnotnull(USER_NAME_FROM_ACEE) | table USER_NAME_FROM_ACEE | dedup USER_NAME_FROM_ACEE | return $USER_NAME_FROM_ACEE

but if I put the search as a subsearch in an if statement as below:

| eval unc=mvcount(user_num) | eval actual_user=if((unc!=1), [ index=indexA user=ABC123 | where isnotnull(USER_NAME_FROM_ACEE) | table USER_NAME_FROM_ACEE | dedup USER_NAME_FROM_ACEE | return $USER_NAME_FROM_ACEE ], user) | table actual_user

it throws the error "Error in 'eval' command: The expression is malformed. An unexpected character is reached at ') , user)'." I tested a simplified search and found the problem is the field name USER_NAME_FROM_ACEE. If I do:

| eval unc=mvcount(user_num) | eval actual_user=if((unc!=1), [ index=indexA user=ABC123 | table user ], user) | table actual_user

it works fine, but if I do:

| eval unc=mvcount(user_num) | eval actual_user=if((unc!=1), [ index=indexA user=ABC123 | table USER_NAME_FROM_ACEE ], user) | table actual_user

it throws the error, which makes no sense to me. Any suggestion why this happens?
Has anyone sent messages from Slack channels to Splunk? Looking for a solution, I have the Slack App for Splunk add-on installed on my SH. I created a custom app in Slack (api.slack.com), added the user scope channels:history, and generated an OAuth token. On the Splunk side, I created an input under slack:messages and provided the index, the token, and 30 initial days to load, but I am not getting any results. Has anyone ever tried this integration? Thanks
Hello all, I have a request where users will add their data (CSV) manually every day. We are using the Splunk Cloud version; which capabilities need to be provisioned for these users? I found edit_monitor in a few solutions, but that capability is not available in Splunk Cloud. Is it the same as edit_upload_and_index? Do any other capabilities also need to be added? Thanks
Hello, I am looking for a simple query that would give me the completion percentage of the data migration from an on-prem environment to the Cloud. Please see the example below.
Hi Team, I wanted to ask if it is possible to create one PDF file from two dashboards using one REST API call or multiple API calls. I tried them individually and they work as below:

curl -u user:password -k https://localhost:8089/services/pdfgen/render?input-dashboard=health_overview >> test1.pdf       (test1.pdf was created successfully)
curl -u user:password -k https://localhost:8089/services/pdfgen/render?input-dashboard=sales_dashboard >> test2.pdf       (test2.pdf was created successfully)

Now, I have tried the below, but none of them seem to work. I am certainly doing something wrong.

curl -u user:password -k https://localhost:8089/services/pdfgen/render?input-dashboard=health_overview >> test3.pdf
curl -u user:password -k https://localhost:8089/services/pdfgen/render?input-dashboard=sales_dashboard >> test3.pdf
(The result is that only the sales_dashboard data is available; it overwrites health_overview.)

Then I tried this:
curl -u user:password -k https://localhost:8089/services/pdfgen/render -d input-dashboard=health_overview -d input-dashboard=sales_dashboard_clone_1 >> okdokey.pdf
(The second parameter (sales_dashboard) overwrites the first written PDF.)

Then I tried this:
curl -u user:password -k https://localhost:8089/services/pdfgen/render?input-dashboard=sales_dashboard_clone_1?input-dashboard=health_overview >> testClone2.pdf
(Of course, this was malformed and threw an error rather than working as hoped.)

I don't want to use any external Python module (like PyPDF or pdfwriter) to combine the PDFs. Any help or workaround would be appreciated; any out-of-the-box technique would be highly appreciated. Thanks in advance!

Regards, Abhishek Singh
What command would I use to check whether anyone downloaded a large file (or files) before they were terminated?
How can I integrate the Qualys Continuous Monitoring (CM) module with Splunk? Our requirement is to get the alerts generated by CM ingested into and visible in Splunk; Splunk will then generate an incident accordingly for any alert received from CM.

The following presentation ( https://Ten-Inc.Com/Presentations/Qualys-Continuous-Monitoring-Protect-Global-Perimeter.Pdf ), on page 11, talks about integrating the CM module with Splunk, but no details are provided. We have installed the Qualys TA in Splunk, can see the vulnerability data coming into our Splunk, and have tried creating some searches, but the search results do not contain enough detailed information for us to trigger an alert off of.

I have also looked at the document https://www.Qualys.Com/Docs/Qualys-Ta-For-Splunk.Pdf ; it does not say anything about the CM module.

Please advise.
I installed the Splunk Add-on for AWS on my HF and created an input with a custom data type to ingest the AWS instance logs (basically Linux and Windows event logs), with a custom sourcetype of aws:s3:hostOS. The problem is that when I search the logs, the timestamps show up offset by +4 hours. I'm in EST and the OS logs are in GMT. Do I need to modify props.conf on my HF to adjust for GMT, on the SH cluster, or on both the HF and SH cluster? Thx
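Timestamp parsing happens on the first full Splunk instance in the pipeline, which here is the HF, so a timezone override in props.conf there is the usual fix. A sketch, assuming the sourcetype name from the post and that no TZ is present in the raw timestamps:

```
# props.conf on the HF (the instance that first parses the data)
[aws:s3:hostOS]
TZ = GMT
```

This only affects newly indexed events; already-indexed data keeps its parsed timestamps.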
I have two queries and I want to get the results that are in query 1 but not in query 2.

Query 1:
index=APP_SERVER- source=API_LOG "Error while create record for customer id*" | rex "customer id : (?<custId>.*\w+)" | dedup custId | table custId

Output:
94ABGH0048
902SDKK557
902SGHT224
902SLWT720

Query 2:
index=APP_SERVER- source=API_LOGS "Successfully created record for customer id*" | rex "customer id : (?<custId>.*\w+)" | dedup custId | table custId

Output:
945TTFK0548
94ABGH0048
902SLWT720

I want the output below from both queries, i.e. the IDs that are in the query 1 results but not in the query 2 results:

902SDKK557
902SGHT224
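Conceptually the desired result is a set difference between the two output columns. A minimal Python sketch with the IDs from the post, just to pin down the logic being asked for:

```python
# Customer IDs returned by each query (values from the post).
errored = {"94ABGH0048", "902SDKK557", "902SGHT224", "902SLWT720"}
succeeded = {"945TTFK0548", "94ABGH0048", "902SLWT720"}

# IDs present in query 1 but absent from query 2.
never_succeeded = sorted(errored - succeeded)
print(never_succeeded)  # ['902SDKK557', '902SGHT224']
```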
Hello Splunk Community, I'm just starting out configuring Splunk and am having an issue with my timestamps and line breaks. Currently, the events in the log all share one timestamp, seen below at the top (red). I want to separate the events so each keeps its own unique MXT timestamp (green). I tried setting the sourcetype to Auto, but I believe I also need to fix my line breaks, and I'm not sure how or where to configure this. Any help is appreciated, thank you. Example:

10/22/20 3:45:04.000 AM
... 24 lines omitted ...
BLANKUSER  10/16/20 03:10:13 MXT   CMND TSS ADD(XFERER) SUS
BLANKUSER  10/16/20 03:10:13 MXT   CMND TSS ADD(XFDGWR) SUS
BLANKUSER  10/16/20 07:00:07 MXT   CMND TSS CRE(DFETET) NAME('DOE, JOHN') TYPE(USER) DEPT(SA81195) PASS( ,60,EXP) PROFILE(PROADTCS)
BLANKUSER  10/16/20 07:00:08 MXT   CMND TSS ADD(EREFETE) DSNAME(DRERER.)
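Line breaking and timestamp extraction are both controlled in props.conf for the sourcetype, on the indexer or heavy forwarder that first parses the data. A sketch, assuming a hypothetical sourcetype name and that every event begins with a user name followed by an MM/DD/YY HH:MM:SS MXT timestamp like the sample:

```
# props.conf - [tss_audit_log] is a hypothetical sourcetype name
[tss_audit_log]
SHOULD_LINEMERGE = false
# Break before each "USER  MM/DD/YY HH:MM:SS MXT" line
LINE_BREAKER = ([\r\n]+)(?=\w+\s+\d{2}/\d{2}/\d{2} \d{2}:\d{2}:\d{2} MXT)
# The timestamp follows the leading user name
TIME_PREFIX = ^\w+\s+
TIME_FORMAT = %m/%d/%y %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 20
```

The regex here is fitted to the sample lines shown above and would need adjusting if other record shapes appear in the log.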
Hello, I'm working on customizing a Maps+ for Splunk visualization and I need help distinguishing different cluster groups by color. I've used the clusterGroup field in SPL to label different clusters, but there's no way to tell which cluster belongs to which group. Any help with this would be appreciated, thank you.

Kind regards, Stephanie
I need to create an alert for when my DHCP server is not returning acknowledgements (DHCPACK). Please help.
We have the official IIS app from Splunkbase, and I have been unable to get data from this location for a long time. I recently saw the errors below and updated the sourcetype name in inputs.conf to bypass the default sourcetype's props.conf.

ERROR TailReader - Ignoring path="C:\inetpub\logs\LogFiles\ABCDFEF\u_test_x.log" due to: Bug during applyPendingMetadata, header processor does not own the indexed extractions confs.
ERROR WatchedFile - Bug during applyPendingMetadata, header processor does not own the indexed extractions confs.

Kindly advise.
Hi, I am trying to order events of Wireshark data, i.e. events like:

time1  src, dst, src_port, dst_port  SYN
time2  src, dst, src_port, dst_port  FIN
time3  src, dst, src_port, dst_port  SYN
time4  src, dst, src_port, dst_port  FIN

but transaction creates a transaction out of the time2 and time3 events, which do not belong together, despite using startswith and endswith. I used the following:

| eval sdspdp=src+":"+dst+":"+src_port+":"+dst_port | transaction sdspdp startswith="SYN" endswith="FIN" keepevicted=true unifyends=true | where closed_txn=1 | table time packet_info closed_txn

I would expect the SYN before the Client Hello and FIN, but it isn't. Why? The results (time and packet_info are multivalue fields per transaction):

time:
2020-10-22 18:51:42.772567
2020-10-22 18:51:42.791144
2020-10-22 18:51:42.933482
packet_info:
50020 → 8089 [FIN, ACK] Seq=930 Ack=119936 Win=81408 Len=0 TSval=2422219150 TSecr=1505344044
50020 → 8089 [SYN] Seq=0 Win=14600 Len=0 MSS=1460 SACK_PERM=1 TSval=2422219110 TSecr=0 WS=128
Client Hello
closed_txn: 1

time:
2020-10-22 18:51:42.464082
2020-10-22 18:51:42.492535
2020-10-22 18:51:42.602374
packet_info:
54728 → http-alt(8080) [FIN, ACK] Seq=896 Ack=32983 Win=42368 Len=0 TSval=2422219067 TSecr=2891895242
54728 → http-alt(8080) [SYN] Seq=0 Win=14600 Len=0 MSS=1460 SACK_PERM=1 TSval=2422219032 TSecr=0 WS=128
Client Hello
closed_txn: 1

Thank you
Markus
I am trying to get the response time for each of my services; however, I've ended up using the wrong method:

track=prod app=servicename | stats by TrackingId

How do I get just the average response time based on TrackingId if, say, two events have the same TrackingId?

2020-10-22 13:38:40.912 callerId=IndexConsumer, TrackingId=0db9610bb68a4097bee41ae95421936b app=servicename responseCode=500;elapsed=4968;responseLen=170]
2020-10-22 13:38:40.909 callerId=IndexConsumer, TrackingId=0db9610bb68a4097bee41ae95421936b app=servicename loggerclass=org.hibernate.engine.jdbc.spi.SqlExceptionHelper loggerline=142 loggermethod=logExceptions() loggerlevel=ERROR

I want to use the single-value chart to display this average duration. How do I do this?
I've read all the compatibility matrix docs, but I'm not sure how my situation fits into them, specifically compatibility when sending data through intermediate heavy forwarders. Here's my current environment, and everything is working fine: UFs (6.3.x - 7.x) ---> intermediate HFs (7.3.6) ---> indexer cluster (7.3.6). I need to point my HFs at newly built 8.x indexers (not upgrading the existing indexers; these are new indexers at a new location). Will I have a problem? I know that 6.x UFs can't send to 8.x indexers, but am I getting around the problem with a 7.x intermediate HF? And yes, ideally I would upgrade all the UFs, but this situation is temporary. Thanks!