Index name: generated (10 million new records every day). Index fields: username, secret, key. Lookup file: secrets.csv with field secret (128-bit string, 1 million static records). I am creating a report to check whether any secret is found within the secrets.csv list and flag it. index=generated [| inputlookup secrets.csv | fields secret] | table username, secret, key How is the check that secret exists both in the generated index and in the inputlookup actually performed by this search string?
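For reference, the subsearch in that query is expanded by Splunk into an implicit filter on the outer search, roughly index=generated (secret="value1" OR secret="value2" OR ...), so only events whose secret field matches an entry in the lookup survive. One thing to watch is the default subsearch limit of about 10,000 results, which matters when secrets.csv holds ~1 million rows. A minimal sketch of an alternative that sidesteps that limit, assuming a lookup definition named secrets_lookup (a placeholder) pointing at secrets.csv:

index=generated
| lookup secrets_lookup secret OUTPUT secret AS matched_secret
| where isnotnull(matched_secret)
| table username, secret, key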
I have two indexes. Index accounts: [user, payroll]. Index employees: [user, emp_details, emp_information]. I am trying to write a search that takes all 1 million users in the accounts index and finds the corresponding details for each user in the employees index, which contains 20 million records. I tried something like index=accounts user=* | join type=left user [search index=employees | fields user, emp_details, emp_information] | table user, emp_details, emp_information but it is not searching all the users and joining all of them.
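In case it is relevant, join runs its subsearch under result limits (by default around 50,000 rows, plus a runtime cap), so with 20 million employee records many users will never be joined. A minimal sketch of a join-free alternative using stats, assuming the user field is named identically in both indexes:

(index=accounts) OR (index=employees)
| fields user, payroll, emp_details, emp_information
| stats values(payroll) as payroll, values(emp_details) as emp_details, values(emp_information) as emp_information by user
| table user, emp_details, emp_information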
I am migrating my user account to central identity using an existing email, but I can't tell how I should proceed to finish.

This article provides context and instructions for completing the user account migration to the AppDynamics central identity.

Why is this happening? | What must I do? | I can't remember my password | Step by Step Instructions | Resources

Figure 1 - Migration Experience page

When a user signs in successfully to a Controller account using their local username, completes the migration experience page (see Figure 1, left) by providing their email address and is presented a new login screen, they are often confused as to what to do next. Is this happening to you? There is an easy answer. Simply sign into the new login screen using the same email address you provided above as the username. Use the password you use with AppDynamics for this email at the password prompt.

NOTE | This may or may not be the same password you use to log in to your Controller. You may have set this some time ago! Feel free to use Reset Password.

Once you complete this sign-in, the username for your Controller local user account will be migrated to be your email address user account, and you will be directed to the Controller account in a logged-in state.

Why is this happening?

Figure 2 – Login screen

This migration process is about making sure each human user can access all AppDynamics resources with the same user identity. It will unify potentially multiple user accounts into a single user account based on the email address provided. When you see the login screen (see Figure 2, left) after providing your email address (see Figure 1, above), it means that AppDynamics already has a user account created with that email address as the username, and we need you to prove that you own it.

Back to TOC

How did I get this account?

There are several ways you could have obtained it:
- You have completed training and used this account to access training.
- You can file support tickets and use this account for that action.
- You already completed migration on another Controller account.
- You signed up for a self-service trial using this email address.
- Your company admin created an account for you using this email address some time ago and you've never even used it. You may not recall, but we remember the account and need you to prove you own it.

Back to TOC

What must I do?

You simply enter your email address as the username, provide the password, and sign in.

What if I don't remember my password?

Easy. Simply click the Reset Password link, and an email will be sent to you. Follow the instructions there and create a new password. With this new password, return to the flow and try again.

What is AppD doing to improve this experience?

We are aware of the problem and will update the experience in the coming days to include instructions to help ensure the next steps are clear.

Back to TOC

I'm still confused. Can you provide an example and go through this process step-by-step?

Sure.
Let's set this example up:
- Your controller account: helpmeout
- Your local username: legacyuser
- Your email address: userone@helpmeout.com

Now for the steps:
1. Navigate to AppDynamics using helpmeout.saas.appdynamics.com. (NOTE | This is not a real account and is just provided for this example scenario.)
2. Enter the account name of: helpmeout.
3. Enter your username of: legacyuser and click Next.
4. Enter the password for legacyuser and click Sign in.
5. You are presented with the migration dialog (see Figure 1, above) and are asked to enter your email. The dialog shows the email address we have on file for you and an empty confirmation email field. You may change the email address as displayed or leave it, but you must confirm it in the confirmation field.
6. You complete the email field as userone@helpmeout.com and the confirmation field as userone@helpmeout.com and click Confirm.
7. The system finds that userone@helpmeout.com already exists in the AppDynamics Identity Provider and prompts you with a login screen (see Figure 2, above) to authenticate that it's really you. We want to make sure that you are who you say you are for security reasons.
8. You enter your email address as userone@helpmeout.com and click Next. NOTE | We recommend that you check the Remember me box for future convenience.
9. You are presented with a password field. Enter the password for the userone@helpmeout.com user and click the Sign in button. If you don't remember your password, click Reset password and follow the instructions to set a new password.
10. You will be authenticated and then will receive a message indicating that you have completed migration. It will remind you that, from now on, you will log in to the helpmeout account using the username of userone@helpmeout.com and the password you just used to authenticate. You can then click the link to the helpmeout.saas.appdynamics.com account, where you will be automatically logged in.

Congrats! You've completed migration and will be back to using the system as normal with the added benefit of seamless access (SSO) to other accounts that use this username as well as resources of AppDynamics like Community, University, and Support.

Back to TOC

Additional resources
- AppDynamics Global Identity Migration Experience - FAQ
- Why sign in after migration?
Any ideas on how to pull a random sample for the logging application that spans the full month and does not specify sources or source types? We're trying to make this generic enough that it can be applied to any system that starts logging, to scan samples of whatever raw data it has logged. The query that has been used historically only pulls the first 25 events from the most recent times items were logged:

index=co_lob co_id=app1 co_env=prod | head 25 | stats latest(_time) as latestinput, latest(source) as source, latest(_raw) as latestraw, count by host, index, co_id, sourcetype, co_env | convert timeformat="%Y-%m-%d %H:%M:%S" ctime(latestinput) AS latestinput | eval application="app1" | table application, count, host, index, latestinput, latestraw, source, sourcetype, co_id, co_env

I found the information on random() and tried:

index=co_lob co_id=app1 co_env=prod | eval rand=random() % 50 | head 50

and was going to go from there to extract into the right table format for the scanning, but even just running for the week to date it times out. I'm trying to get a random 50 or 100 from across an entire month. Using Event Sampling doesn't work because even if I go 1 : 100,000,000, for some of these applications that are logging millions of transactions an hour, it's causing performance issues and is too much for review. Thank you in advance for any guidance.
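One approach that keeps the month-wide search from flowing millions of rows downstream is to discard most events as early as possible with a random modulo filter and then take a fixed-size random pick from what remains; a rough sketch, where the modulo divisor is a placeholder to tune to each application's volume:

index=co_lob co_id=app1 co_env=prod earliest=-1mon@mon latest=@mon
| where random() % 100000 == 0
| eval r=random()
| sort 100 r
| eval application="app1"
| convert timeformat="%Y-%m-%d %H:%M:%S" ctime(_time) AS sampletime
| table application, host, index, sampletime, _raw, source, sourcetype, co_id, co_env

The search still has to read the events to evaluate random(), so for the highest-volume sources a scheduled summary index populated in smaller time slices may be the only way to avoid the timeout.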
My coldToFrozen has stopped working. It might be related to Python 3, but I'm not 100% sure. I've done some tweaking to the coldtofrozen.py shebang (#! /opt/splunk/bin python) and I've checked other settings, but all seem to be okay. Are there any commands or tools I can run to help troubleshoot? Where would the errors be logged? Thanks
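In case it helps while troubleshooting, bucket-freeze activity and script failures are normally written by splunkd itself, so one place to look is $SPLUNK_HOME/var/log/splunk/splunkd.log, or the same data through the internal index; a rough sketch (the component and search terms here are assumptions to adjust):

index=_internal sourcetype=splunkd (coldToFrozen OR BucketMover) (log_level=ERROR OR log_level=WARN)

You can also exercise the script by hand against a copy of a cold bucket using Splunk's bundled interpreter, e.g. /opt/splunk/bin/splunk cmd python /path/to/coldtofrozen.py <bucket_path> (paths are placeholders), which tends to surface Python 3 syntax or permission errors immediately.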
As a result of an inputlookup, I have the following table 1 (a dish that a chef can prepare, and the chef's name):

dish    chef
a       gordon ramsay
b       gordon ramsay
c       Guy Fieri
d       Guy Fieri
f       Jamie Oliver
g       gordon ramsay
h       gordon ramsay
        Rachael Ray

and I have the following table from another outputlookup (the restaurant where a chef works, and the chef's name):

restaurant    chef
1             gordon ramsay
2             Guy Fieri
3             Guy Fieri
4             Jaime Oliver
5             Michael Caines

I want to combine the 2 tables into this:

restaurant    dish    chef
1             a       gordon ramsay
1             b       gordon ramsay
2             c       Guy Fieri
2             d       Guy Fieri
3             c       Guy Fieri
3             d       Guy Fieri
4             f       Jamie Oliver
1             g       gordon ramsay
1             h       gordon ramsay
5             null    Michael Caines
null          h       Rachael Ray

Basically, based on tables 1 & 2, how do I get a table telling me the restaurant where a chef works, the dishes that he/she would prepare, and the chef's name? In the things I've tried, I'm able to combine tables 1 & 2 with the join command, but a lot of results end up getting filtered out (e.g. I might end up with one result per chef but not getting all the dishes, or one result per dish but not getting all the restaurants).
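One join-free way to get that shape is to bring both lookups into a single result set, group them by chef with stats, and then expand the multivalue fields; the lookup file names below are placeholders for your two lookups:

| inputlookup dish_lookup.csv
| append [| inputlookup restaurant_lookup.csv]
| stats values(dish) as dish, values(restaurant) as restaurant by chef
| fillnull value="null" dish restaurant
| mvexpand restaurant
| mvexpand dish
| table restaurant, dish, chef

Chefs that appear in only one of the lookups keep the literal "null" in the missing column, matching the desired output, while chefs with several restaurants and several dishes get one row per restaurant/dish combination.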
I simply need to timechart the numeric values from a field that is being returned. For example, index=proxy | timechart count by resp_time is getting something like this: I need one line that charts all the values... instead it splits them up by how many times it has seen each value.
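If resp_time is the numeric field, charting an aggregate of it instead of a count split by each distinct value gives a single line; a minimal sketch:

index=proxy | timechart avg(resp_time) as avg_resp_time

Swapping avg() for max(), min(), or perc95() changes which single line you get.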
Hello, I have an issue with web and syslog indexes not being logged properly. I believe that I will need to change the settings of the Splunk forwarders, and I need help with modifying the UF configs so that I can correct the data that needs to be logged. We have a deployment server set up and I think this is probably the route to go. What does the process look like for doing this?
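For what it's worth, a common pattern with a deployment server is to package the input changes as a small app under $SPLUNK_HOME/etc/deployment-apps/ on the deployment server, map it to the right forwarders in serverclass.conf, and let the clients pull it and restart. A minimal sketch, where the app name, monitored paths, indexes, and sourcetypes are all placeholders to replace with your own:

# deployment-apps/my_web_syslog_inputs/local/inputs.conf
[monitor:///var/log/syslog]
disabled = 0
index = syslog
sourcetype = syslog

[monitor:///var/log/httpd/access_log]
disabled = 0
index = web
sourcetype = access_combined

# serverclass.conf on the deployment server
[serverClass:web_syslog_hosts]
whitelist.0 = *

[serverClass:web_syslog_hosts:app:my_web_syslog_inputs]
restartSplunkd = true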
Hello, I am a beginner with Splunk. I made a query and my search result looks like:

text1 text2 text3 response: {
  "status":"UP",
  "object1":{ "field1":"name1", "status":"UP" },
  "object2":{ "field2":"name2", "status":"UP" },
  "object3":{
    "object4":{ "field4":"name4", "status":"UP" },
    "object5":{ "field5":"name5", "status":"UP" },
    "status":"UP"
  },
  "object6":{ "field6":"name6", "status":"UP" }
}

I want to obtain the value of object3.status for a column of a table. How do I do this? With rex field=_raw or spath? Thank you in advance.
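Since the JSON sits after other text in _raw, one option is to pull the JSON out with rex and then hand it to spath; a minimal sketch (the captured field name is a placeholder):

... | rex field=_raw "(?s)response:\s*(?<response_json>\{.+\})"
| spath input=response_json path=object3.status output=object3_status
| table object3_status

If the events were pure JSON, | spath path=object3.status output=object3_status on its own would be enough.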
Hi. I am new to Splunk and testing it in a lab right now, seeing if it will work for us. Some of the docs are a little confusing, so I want to make sure I am understanding things right. What I need to monitor is events from all servers and AD changes/lockouts, maybe Fortinet logs as well. We're hoping to use Splunk for AD monitoring and stop paying for Netwrix. I'm trying to determine how much data would be needed for ingestion. Is ingestion only for importing of data, or is it also used in the processing that Splunk does? I currently have the Windows add-on installed in my instance. My first part of testing is this: 1. To get logs from servers, do I still need to configure the inputs file in the default directory, enabling ADMON? 2. For AD change monitoring, same as above: do I need to enable it there, or do I just set it up in the AD tab under Settings -> Inputs? TIA for your help! -Will
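On points 1 and 2, the Windows add-on ships with its inputs disabled, and the usual practice is to enable them in a local copy of inputs.conf (under local/ rather than default/, so upgrades don't overwrite your changes) or through the UI; either works. A rough sketch of the relevant stanzas, with the attribute values as assumptions to verify against the add-on's documentation:

# $SPLUNK_HOME/etc/apps/Splunk_TA_windows/local/inputs.conf (sketch)
[WinEventLog://Security]
disabled = 0

[WinEventLog://System]
disabled = 0

[admon://default]
disabled = 0
monitorSubtree = 1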
IN THE BLOG | Cisco AppDynamics GovAPM gives government organizations access to innovative cloud strategies without trading off compliance and great application performance.  Read more here: FedRAMP authorized in all-in-one visibility   What's in the Blog post?  Besides compliance and application performance, Cisco AppDynamics GovAPM enables efficient application management for government agencies, while driving cloud adoption and maintaining high security standards. It enables federal and state government agencies to better understand user experiences, improve performance and better connect citizens with critical government resources. Cisco AppDynamics GovAPM is fully FedRAMP (Moderate) authorized. The blog covers: Network visibility Infrastructure visibility End User Monitoring  Browser Real User Monitoring Cluster monitoring   About Kara McMillan Kara McMillan is the Senior Product Manager for GovAPM, the Cisco AppDynamics FedRAMP (Moderate) offering, and she is also the PM for Compliance for all Cisco offerings. With over 20 years in the software industry, Kara has extensive experience in Storage Management and Operations Management solutions. Additional resources  Here's our GovAPM product page
I have a requirement to process and correlate the data as soon as it comes in. The data has some triggering events, which can be identified and used. Is it possible in Splunk to run something based on the incoming data?
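For context, the usual mechanism for this in Splunk is a saved search run as an alert, either in real time or on a tight schedule, keyed on the triggering events; a minimal sketch where the index, sourcetype, and trigger string are placeholders:

index=app_events sourcetype=my_source "TRIGGER_EVENT" | stats count as trigger_count, values(host) as hosts

Scheduled, say, every 5 minutes over the last 5 minutes and set to trigger when trigger_count > 0, the alert can then run an email, webhook, or custom alert action that does the downstream correlation.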
Hi all, I am new to Splunk and joined this community seeking help. Please help me clear up my doubts. My questions are: 1. When my Splunk is down for an hour and I get an ad hoc request for the data from that hour, once Splunk is back up what do we need to do (restart the Splunk forwarder?) to restore the data, will the data be restored by itself, or will the data be lost? 2. What should I do, and where should I check at the instance level, when I am unable to see the latest log files/data in Splunk? 3. What should I do if log files are missing from the Splunk forwarder after patching, how do I add the files back, and what is the correct approach?
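On question 2, one quick check from the search head is whether the forwarder is still phoning home and whether it is logging errors; a rough sketch, with the host name as a placeholder:

index=_internal host=<forwarder_host> sourcetype=splunkd (log_level=ERROR OR log_level=WARN)

On the forwarder itself, $SPLUNK_HOME/bin/splunk list forward-server and $SPLUNK_HOME/bin/splunk list monitor show whether the output destination and the monitored files are still configured after patching.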
I have a series of panels in a dashboard that drill down to the next panel. I discovered that the data I want to drill down on populates in different sections of the event. I used the field extraction tool in Splunk to create two fields. I then used eval and coalesce to create one field.

index="someIndex" sourcetype="FooSource" | rename Field1 as Foo1 Field2 as Foo2 | eval TotalFoo = coalesce(Foo1,Foo2) | chart dc(field3) by "TotalFoo" Field4

The panel I want to populate based on the TotalFoo field won't work. I believe this is because the sub-search runs before the main search, so the TotalFoo field does not exist.

index="someIndex" sourcetype="FooSource" | rename Field1 as Foo1 Field2 as Foo2 | eval TotalFoo = coalesce(Foo1,Foo2) | search TotalFoo="$onClick$"

I'm wondering how to get around this limitation, or if that is possible?
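If the token is the part that never gets a value, the source panel's drilldown usually has to set it explicitly; a minimal Simple XML sketch using the $onClick$ token name from the post (whether $click.value$ or $click.value2$ carries TotalFoo depends on which column is clicked):

<drilldown>
  <set token="onClick">$click.value$</set>
</drilldown>

Note that within a single search pipeline the eval stage runs before the later | search stage, so TotalFoo should already exist when the filter is applied; in cases like this an unset token is a more common culprit than search order.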
Hello, a question about AWS Systems Manager | Splunkbase: is there a nice way to teach Splunk (and SOAR) to trigger incidents defined in AWS SSM Incident Manager? The root cause is that Incident Manager allows calling, and in our case this is almost the only option to make Splunk informative and able to react to alerts outside of business hours.
Hey all,  I'm attempting to create a query that will compare a specified time frame to that same time frame from each of the four weeks prior with a line graph. Thanks in advance for any help!
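One way to get that overlay is timechart plus timewrap, which folds the series into week-over-week lines on the same x-axis; a minimal sketch, with the index, sourcetype, and span as placeholders, run over the last five weeks so the current week and the four prior ones all appear:

index=main sourcetype=app_events earliest=-5w@w | timechart span=1h count | timewrap 1week

Rendered as a line chart, each week becomes its own series over the same relative time frame.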
I am trying to pass multiple values using a dropdown input. How can I add multiple values with each choice in a dropdown input? As the user clicks any choice, all the values associated with the choice should be passed and populate a panel based on the values.

<form>
  <label>Demo</label>
  <fieldset submitButton="false">
    <input type="dropdown" token="Variety_token" searchWhenChanged="true">
      <label>Fruit List</label>
      <choice value="111,222,333,444">Mango</choice>
      <choice value="123,456,112">Apple</choice>
      <choice value="555,666,777,888,999">Banana</choice>
      <choice value="753,482">Grapes</choice>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <title>Fruit List</title>
        <search>
          <query>index=* sourcetype=source Fruitid=$Variety_token$ | stats count by Fruitname, Fruitvariety, Fruitid......</query>
          <earliest>-1y@y</earliest>
          <latest>@y</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
</form>
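One way this is often handled, assuming the Fruitid values really are plain comma-separated numbers as in the choices above, is to keep the comma-separated token value and let the search consume it with IN; a sketch of just the query element:

<query>index=* sourcetype=source Fruitid IN ($Variety_token$) | stats count by Fruitname, Fruitvariety, Fruitid</query>

A common alternative is to build the whole filter into the choice value, e.g. <choice value="(Fruitid=111 OR Fruitid=222 OR Fruitid=333 OR Fruitid=444)">Mango</choice>, and reference $Variety_token$ directly in the search without the Fruitid= prefix.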
I'm looking specifically at the _configtracker index to audit changes to the serverclass.conf file. Because of the nature of <filtertype>.n = <value>, the behavior is one action that removes all values, then a second action that rewrites all the values in lexicographic order. This is making auditing adds/removals/static values very difficult. I have managed to transact the events so I can compare old values to new values. I struggle with how to compare the results to identify changes when the values list is very long.

Current table output:

Unique Ident     OldValues    NewValues
<transact-x>     A B C D      A C D E

What I'm looking for:

Unique Ident     OldValue    NewValue    Audit
<transact-x>     A           A           NoChange
<transact-x>     B                       Removed
<transact-x>     C           C           NoChange
<transact-x>     D           D           NoChange
<transact-x>                 E           Added

Assumptions:
1) stats values(field): I don't believe any of my samples cross over 10,000, which I believe is the default limit for a values field.
2) The values function will lexicographically order all values regardless of their original order in the raw data feed.
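One way to turn the side-by-side lists into per-value rows, assuming OldValues and NewValues are multivalue fields and the identifier field is named Unique_Ident (a placeholder), is to expand the union of both lists and classify each value with mvfind; note that mvfind takes a regex, so values containing regex metacharacters would need escaping first:

| eval all_values=mvdedup(mvappend(OldValues, NewValues))
| mvexpand all_values
| eval in_old=if(isnotnull(mvfind(OldValues, "^".all_values."$")), 1, 0)
| eval in_new=if(isnotnull(mvfind(NewValues, "^".all_values."$")), 1, 0)
| eval Audit=case(in_old==1 AND in_new==1, "NoChange", in_old==1, "Removed", true(), "Added")
| eval OldValue=if(in_old==1, all_values, null()), NewValue=if(in_new==1, all_values, null())
| table Unique_Ident, OldValue, NewValue, Audit

Because mvexpand copies the other fields onto every expanded row, each value can be tested against both original lists independently, which sidesteps the ordering problem entirely.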
I have the following query that gives the count for port and CPU percent.

index=abc source=xyz SMFID=EDCD SMF119HDSubType=2
| timechart span=60m count by SMF119AP_TTLPort_0001 usenull=f useother=f
| stats values(*) as * by _time
| untable _time Port Count
| where Count > 4000
| eval DATE = strftime(_time,"%m/%d/%y %H:%M:%S.%2N")
| eval Date = substr(DATE,1,9)
| eval Hours = substr(DATE, 11,18)
| appendcols [search index=abc source=xyz (SYSNAME=EDCD) ((date_wday=tuesday AND date_hour=*) OR (date_wday=wednesday AND date_hour=*) OR (date_wday=thursday AND date_hour=*) OR (date_wday=friday AND date_hour=*) OR (date_wday=monday AND date_hour=10) OR (date_wday=monday AND date_hour=11) OR (date_wday=monday AND date_hour=12) OR (date_wday=monday AND date_hour=13) OR (date_wday=monday AND date_hour=14) OR (date_wday=monday AND date_hour=15) OR (date_wday=monday AND date_hour=16) OR (date_wday=monday AND date_hour=17) OR (date_wday=monday AND date_hour=18) OR (date_wday=monday AND date_hour=19) OR (date_wday=monday AND date_hour=20) OR (date_wday=monday AND date_hour=21) OR (date_wday=monday AND date_hour=22) OR (date_wday=monday AND date_hour=23) OR (date_wday=saturday AND date_hour=0) OR (date_wday=saturday AND date_hour=1) OR (date_wday=saturday AND date_hour=2) OR (date_wday=saturday AND date_hour=3) OR (date_wday=saturday AND date_hour=4) OR (date_wday=saturday AND date_hour=5) OR (date_wday=saturday AND date_hour=6) OR (date_wday=saturday AND date_hour=7))
| bin span=1h@h _time
| eval "Hours"=strftime('_time',"%H:%M:%S.%2N")
| eval DATE = strftime('_time',"%m/%d/%y %H:%M:%S.%2N")
| eval Date = substr(DATE, 1,9)
| eval CPU = round(RCVCPUA/16,2)
| stats avg(CPU) as "CPU" by Hours Date
| eval CPU=round(CPU,2) ]
| table Date Hours Port Count CPU

This generates the following result. I want to set an alert only when the count is > 5000 and CPU > 80. What combined statement can be used to get the desired result?

Date        Hours          Port     Count    CPU
08/22/23    7:00:00.00     23050    75787    38.42
08/22/23    8:00:00.00     23050    19854    84.56
08/22/23    9:00:00.00     23008    4126     37.16
08/22/23    9:00:00.00     23050    20121    35.71
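If the only missing piece is the combined condition, one option is to filter the final table and let the alert trigger on any results; a minimal sketch appended to the end of the search above:

| where Count > 5000 AND CPU > 80

Saved as an alert, the trigger condition can then simply be "number of results greater than 0". One thing to double-check: because appendcols pastes the two result sets together row by row, the CPU value on a row is only meaningful for the alert if the rows of both searches line up on the same Date and Hours.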
Hi, is it possible to search for a field value and count it, for example first for today, and then add the count of the same value from the week before? I checked this example: https://community.splunk.com/t5/Splunk-Search/search-a-value-in-previous-time-period-and-add-to-current-count/m-p/566121 and wrote a query like this:

index=my_summary source="my_source" earliest=-1w@w | bucket span=1w _time | where Total_Requests > 10 AND Total_New_Services > 15 | stats values(info_min_time) as earliest values(info_max_time) as latest values(user) as user, values(Total_Requests) as Total_Requests, values(Service_Name) as Service_Name, values(Total_New_Services) as Total_New_Services by Account_Name _time | convert ctime(earliest) ctime(latest) | eventstats sum(Total_Requests) as Total_Requests_last7days sum(Total_New_Services) as Total_New_Services_last7days by Account_Name

The only issue I see with my query is that the _time values are different and the earliest & latest time values are different (it's a summary index, by the way), but Total_Requests, Total_Requests_last7days, Total_New_Services, and Total_New_Services_last7days are as expected. Any help would be appreciated, thank you!
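One pattern that avoids aligning two separate searches is to pull both windows in one search and bucket events into "today" versus "same day last week" before summing; a rough sketch, reusing field names from the query above, with the time handling as an assumption to adapt:

index=my_summary source="my_source" earliest=-7d@d latest=now
| eval period=case(_time >= relative_time(now(), "@d"), "today", _time < relative_time(now(), "-6d@d"), "same_day_last_week", true(), null())
| where isnotnull(period)
| chart sum(Total_Requests) over Account_Name by period
| eval Total_Requests_combined = coalesce('today', 0) + coalesce('same_day_last_week', 0)

The chart produces one column per period for each Account_Name, and the final eval adds them together; the same pattern can be repeated for Total_New_Services.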