All Topics

Hi, I need to search one index, extract a value from a field in that search, then use that value when searching a different index. I then need to join values from both searches into my output. I'm guessing I need to use a subsearch or append of some kind? The first search is index=cisco_fw | stats values(action), values(user). The second search is index=web_proxy | stats values(url); this index also contains a field named user. I want to take the value of user from the first search and search the user field in the second index with that value. In other words, the user field values from both searches need to match. I then need to combine the output, i.e. values(action), values(url), and values(user), to display all three values on one line. Thanks.
I need to get the count of the total number of events in the search and use it later to calculate the value of another field. I am trying to use the eventstats command to do it. When I use eventstats to create a new total field and later use it to compute errorPercent, the result is always null. This is the sample query I am using:

<search>
| eventstats count as total
| eval Message_new = 'Message.msg'
| rex mode=sed field=Message_new "s/^\"//g"
| eval new_msg=case(like(Message_new,"%xyz%"),"abc",like(Message_new,"%qqq%"),"aaaa",1==1,"unknown")
| eventstats max(_time) as maxtime, min(_time) as mintime
| eval midpoint=(maxtime + mintime) / 2
| eval Period=if(_time > midpoint,"interval_2","interval_1")
| eventstats count(new_msg) as errorcount by Period,new_msg
| xyseries new_msg Period errorcount
| eval percentSpike = round((( interval_2 - interval_1 ) / interval_1 ) * 100)
| eval errorPercent = round(((interval_2 + interval_1) / total) * 100)
| table new_msg interval_1 interval_2 percentSpike errorPercent
This article provides an example of how to use the Splunk Phantom REST API to create multiple assets. This may be useful if you have a large number of similar asset types, such as firewalls, Windows servers, or vSphere servers. See Using the REST API reference for Splunk Phantom for more information.

To create an asset in Splunk Phantom using the REST API, post a JSON object to a specific URL on the Splunk Phantom server. To see the contents of the JSON, you can manually create an example asset, then export the JSON using the API. The following script (tested with Python 2.7) exports all of your assets to JSON files in the current directory. Replace the host, username, and password with the actual values from your own environment:

"""
export_assets.py
This script will go through the list of Assets in a Phantom server
and create a .json file for each in the current directory
"""
import requests, json

host = '10.16.0.201'    #IP address or hostname of the Phantom server
username = 'admin'
password = 'password'
verifycert = False      #Default to not checking the validity of the SSL cert, since Phantom defaults to self-signed
maxpages = 100          #Assume there are no more than 100 pages of results, to protect against an accidental loop

baseurl = 'https://' + host + '/rest/asset?page='    #The base URL for a list of all assets, by page number
currentpage = 0

if not verifycert:
    requests.packages.urllib3.disable_warnings()    #If disabling SSL cert check, also disable the warnings

assetids = []
while True:
    currenturl = baseurl + str(currentpage)
    r = requests.get(currenturl, auth=(username, password), verify=verifycert)
    results = r.json()
    pages = int(results['num_pages'])    #Get the total number of pages, so we know how many to read
    currentpage = currentpage + 1
    for asset in results['data']:
        assetids.append(asset['id'])    #Save the asset ID for each asset, so we can fetch each one
    if (currentpage >= pages) or (currentpage >= maxpages):    #If we have read all the pages or hit the limit, stop
        break

baseurl = 'https://' + host + '/rest/asset/'    #The base URL for an individual asset, by Asset ID
for assetid in assetids:
    currenturl = baseurl + assetid    #Append the Asset ID to the base URL
    r = requests.get(currenturl, auth=(username, password), verify=verifycert)
    results = r.json()
    filename = results['name'] + '.json'    #Use the asset name for the filename and .json for an extension
    file = open(filename, 'w')
    file.write(json.dumps(results, indent=2, sort_keys=True))    #Write out the JSON in mostly-human-readable format

Below is an example of the JSON file produced by the script:

{
  "configuration": {
    "ingest": {},
    "password": "vN9ExjcSq1GhkTzOmwwDTA==",
    "server": "10.16.0.151",
    "username": "root"
  },
  "description": "",
  "disabled": false,
  "id": "3a15ae6f-1013-4143-82c1-a58c30a27bbb",
  "name": "labesxi1",
  "primary_owner": {
    "user_ids": [],
    "voting": 0
  },
  "product_name": "vSphere",
  "product_vendor": "VMware",
  "product_version": "",
  "secondary_owner": {
    "user_ids": [],
    "voting": 0
  },
  "tags": [
    ""
  ],
  "token": null,
  "type": "virtualization",
  "version": 1
}

You can edit the JSON file and change the fields as needed, such as the name, server, and password.
After making the desired changes, use the following script to post the JSON file to the correct URL:

"""
create-asset.py
This script takes a JSON file representing a Phantom Asset,
and creates the Asset on the server
"""
import requests, json, sys

host = '10.16.0.201'    #IP address or hostname of the Phantom server
username = 'admin'
password = 'password'
verifycert = False      #Default to not checking the validity of the SSL cert, since Phantom defaults to self-signed

baseurl = 'https://' + host + '/rest/asset'    #The base URL for posting to create an Asset

if len(sys.argv) < 2:
    print 'Usage: ' + sys.argv[0] + ' [filename]'
    sys.exit(1)

assetfilename = sys.argv[1]
with open(assetfilename) as assetfile:
    assetjson = json.load(assetfile)

if not verifycert:
    requests.packages.urllib3.disable_warnings()    #If disabling SSL cert check, also disable the warnings

#Post the JSON to the URL
r = requests.post(baseurl, data=json.dumps(assetjson), auth=(username, password), verify=verifycert)
print json.dumps(json.loads(r.text), indent=2, sort_keys=True)    #Print the results from the server

When you export an asset using the API, you get all the fields saved for that asset, including some internal fields that you don't need to specify when you create an asset.

Field       Description
id          Each asset is assigned an ID when the asset is created. If you use the JSON file to create a new asset, the existing ID is ignored.
password    If you use the JSON file to create a new asset, the existing encrypted password causes the creation of the new asset to fail authentication. You must replace the encrypted password with a clear text password.

The minimum number of fields required to create an asset depends on the asset type. Some assets may only require a name, vendor, and product, while others may also require several additional fields.
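The cleanup described in the table above can be sketched as a small helper. This is a minimal illustration, not part of the original scripts; the function name is made up, and the field names match the exported JSON shown earlier:

```python
def prepare_for_create(exported, new_password):
    """Strip the server-assigned ID from an exported asset and
    replace the encrypted password with a clear text one."""
    asset = dict(exported)
    asset.pop('id', None)  # the server assigns a fresh ID on creation
    config = dict(asset.get('configuration', {}))
    config['password'] = new_password  # encrypted export value would fail authentication
    asset['configuration'] = config
    return asset

# a trimmed version of the exported JSON shown above
exported = {
    'id': '3a15ae6f-1013-4143-82c1-a58c30a27bbb',
    'name': 'labesxi1',
    'configuration': {'server': '10.16.0.151', 'username': 'root',
                      'password': 'vN9ExjcSq1GhkTzOmwwDTA=='},
}
ready = prepare_for_create(exported, 'password')
```

The result is a dict that can be posted with the create-asset.py script above.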
The following example shows the minimum fields in the JSON for a vSphere asset:

{
  "configuration": {
    "server": "10.16.0.151",
    "username": "root",
    "password": "password"
  },
  "name": "labesxi1",
  "product_name": "vSphere",
  "product_vendor": "VMware"
}

You can edit this file to change the name and IP address, then post it to create a new asset. However, this requires you to edit and post the file once for each asset. An alternative for creating a large number of similar assets is to use a CSV file. For example:

"name","product_name","product_vendor","configuration:server","configuration:username","configuration:password"
"labesxi1","vSphere","VMware","10.16.0.151","root","password"
"labesxi2","vSphere","VMware","10.16.0.152","root","password"
"labesxi3","vSphere","VMware","10.16.0.153","root","password"
"labesxi4","vSphere","VMware","10.16.0.154","root","password"
"labesxi5","vSphere","VMware","10.16.0.155","root","password"
"labesxi6","vSphere","VMware","10.16.0.156","root","password"
"labesxi7","vSphere","VMware","10.16.0.157","root","password"
"labesxi8","vSphere","VMware","10.16.0.158","root","password"
"labesxi9","vSphere","VMware","10.16.0.159","root","password"
"labesxi10","vSphere","VMware","10.16.0.160","root","password"

We can write a script that will create all of these assets at once. The column names in the first line are very important for the script, as they represent the JSON variable names. The name, product_name, and product_vendor parameters are common to every asset. The server, username, and password parameters are specific to the asset type and are in a sub-list named configuration. Hence, the server is referred to as configuration:server.
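The configuration: naming convention can be illustrated in isolation. This sketch (the function name is invented for the example) converts one header/row pair into the nested JSON shape the REST endpoint expects:

```python
def row_to_asset(header, row):
    """Map a CSV header/row pair to an asset dict,
    nesting any configuration:* columns under 'configuration'."""
    asset = {'configuration': {}}
    for column, value in zip(header, row):
        if column.startswith('configuration:'):
            # strip the prefix and place the value in the sub-list
            asset['configuration'][column.split('configuration:', 1)[1]] = value
        else:
            asset[column] = value  # common top-level parameter
    return asset

header = ['name', 'product_name', 'product_vendor',
          'configuration:server', 'configuration:username', 'configuration:password']
row = ['labesxi1', 'vSphere', 'VMware', '10.16.0.151', 'root', 'password']
asset = row_to_asset(header, row)
```

The resulting dict matches the minimum vSphere JSON shown above.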
The following script reads the CSV and creates the assets:

"""
create_assets_from_csv.py
This script takes a CSV file containing parameters needed
to create multiple Phantom Assets
"""
import requests, json, sys, csv

host = '10.16.0.201'    #IP address or hostname of the Phantom server
username = 'admin'
password = 'password'
verifycert = False      #Default to not checking the validity of the SSL cert, since Phantom defaults to self-signed

baseurl = 'https://' + host + '/rest/asset'    #The base URL for posting to create an Asset

if len(sys.argv) < 2:
    print 'Usage: ' + sys.argv[0] + ' [filename]'
    sys.exit(1)

csvfilename = sys.argv[1]
header = []
rows = []
with open(csvfilename) as csvfile:
    reader = csv.reader(csvfile)
    first_row = True
    for row in reader:
        if first_row:
            header = row    #save the header row
            first_row = False
            continue
        if not row:
            continue
        rows.append(row)    #save the data rows into a list

if not verifycert:
    requests.packages.urllib3.disable_warnings()    #If disabling SSL cert check, also disable the warnings

for row in rows:    #loop through the data rows
    asset = {}
    asset['configuration'] = {}
    for i, column in enumerate(header):    #loop again for each header column
        print column + " = " + row[i]
        if column.startswith('configuration:'):    #if the column name starts with configuration:, stick it under the configuration list
            subcolumn = column.split('configuration:', 1)[1]
            asset['configuration'][subcolumn] = row[i]
        else:
            asset[column] = row[i]    #otherwise add the variable to the top level of the JSON
    print json.dumps(asset, indent=2, sort_keys=True)    #pretty-print the JSON we made
    #post the JSON
    r = requests.post(baseurl, data=json.dumps(asset), auth=(username, password), verify=verifycert)
    #pretty-print the JSON result from the server
    print json.dumps(json.loads(r.text), indent=2, sort_keys=True)
    print

For a different asset type, create a separate CSV with the correct header line for that asset type, and one row for each asset.
Do you have a new and valid link for that procedure? http://docs.splunk.com/Documentation/Storm/Storm/User/Howtosetupsyslog   The above comes from : https://community.splunk.com/t5/Getting-Data-In/How-to-get-data-from-the-AIX-errpt-into-Splunk/m-p/126910    
Hello Everyone,

I have been working on a problem for the last few weeks without much success, and I was hoping someone here could point me in the right direction. I have two data sources. In datasource A I have multiple records per host/asset, hundreds to thousands. In datasource B I have one record per host/asset. I need to take a field from the record in datasource B (tags, in this case) and append it to every record in datasource A based on a unique key (asset_uuid in this scenario), with the goal of doing various calculations, searches, and aggregations on the hundreds or thousands of events based on the tag field values.

I first looked at transaction, but that merged all ~500 records for each asset in datasource A, which is not what I need. I then looked at the join command, which I mostly had working, but from what I can tell the subsearch on the join has a limit of 500,000 events. In other discussions I have heard people mention appendcols which, if I am reading the documentation correctly, won't do this either, as it is more of a 1-to-1 than a 1-to-many. My next route is to see whether stats or maybe a calculated field can do this. I was hoping that those more experienced might be able to point me in the right direction while I keep researching. It seems like something that should be super easy, but neither I nor those I have spoken to have found a path yet. Thanks everyone for any advice you may have.
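In plain code terms, what's being described is a one-to-many lookup: build a key-to-tag map from the single-record source, then annotate every event in the many-record source. A sketch with invented records (only the field names asset_uuid and tags come from the post):

```python
# datasource B: one record per asset, keyed by asset_uuid
tags_by_asset = {rec['asset_uuid']: rec['tags']
                 for rec in [{'asset_uuid': 'u1', 'tags': 'prod'},
                             {'asset_uuid': 'u2', 'tags': 'dev'}]}

# datasource A: many records per asset
events = [{'asset_uuid': 'u1', 'value': 1},
          {'asset_uuid': 'u1', 'value': 2},
          {'asset_uuid': 'u2', 'value': 3}]

# append the tag from B to every matching record in A
for event in events:
    event['tags'] = tags_by_asset.get(event['asset_uuid'])
```

In Splunk this one-to-many annotation is typically what a lookup (or eventstats over the combined indexes) provides, without join's subsearch limits.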
When I create an alert that sends a .csv file via email, the .csv file only contains 8,000 results, with an error saying "only the first 8,000 of 20,000 results are included in the attached .csv". Please advise.
Hi, I am creating a report with "chart field1 field2"; field2 only has 2 values, so the result has 3 columns: Field1, Field2Value1, Field2Value2. I'd like to use addtotals and addcoltotals for each row and each column. The issue I am having is that the values of field1 are numbers, so they are counted by both addtotals and addcoltotals, which I don't want. I believe the easiest way is to convert the type of Field1 to string, with a certain format, like number 12345 to string "12345", and number 123 to "00123". I have tried tostring, but they are still counted.
Hi guys! I'm a newbie to Splunk and I would appreciate it if you could help me out on this one (thank you to all the members of the community in advance).

A payment transaction has to be monitored in order to determine which clients are able to successfully make a payment for a purchase on a web page. This payment process has 3 easy stages (A, B and C), and each stage returns a "clientcheck_code" and an "id_payment" for each client. When I visualize the data it looks like this:

Stage A
client   clientcheck_code   id_payment
007      S_50               USJkn
008      S_50               t6yudd
008      S_50               68sgh
006      S_50               8lpifd

Stage B
client   clientcheck_code   id_payment
007      S_50               89Jkn
008      S_50               896gyudd
008      S_59               00smoh
006      S_50               eybry

Stage C
client   clientcheck_code   id_payment
007      S_50               ijfcvgh
008      S_50               t6yuhf
006      S_50               okyrgdf

If any client gets the check_code S_59 at any stage, the transaction is immediately cancelled by the back end; thus, if a client gets this code in stage B, they will not have a record for stage C, as the platform logs them out. But our main goal is to be able to visualize ALL possible combinations of check_code for each attempt made by each client, as we believe that our web server is somewhat faulty at the moment. We want to be able to see something like this:

client   A_clientcheck_code   B_clientcheck_code   C_clientcheck_code   FAILED?
007      S_50                 S_50                 S_50                 NO
008      S_50                 S_50                 S_50                 NO
008      S_50                 S_59                 NULL                 YES
006      S_50                 S_50                 S_50                 NO

This table allows us to see each attempt by each client and what code they received at every stage. Notice that client 008 does not have a code for stage C, because this client got the code S_59 at the previous stage. So we also want to be able to do that.
One idea to achieve the above table is to use the stats command and then list the events by client, but when we use list() we sometimes get very weird output from Splunk, and unfortunately the field "id_payment" is not the same throughout a transaction, so it cannot be used as a "common identifier" for the stats command. So we thought of this solution: what if we could overwrite the field id_payment for stages B and C with the id_payment of stage A? That way we would have the same id_payment for every stage, and could use stats to count by id_payment, which would then be a "common identifier" for every attempt a client makes. To illustrate, the table should look like this:

client   A_clientcheck_code   B_clientcheck_code   C_clientcheck_code   FAILED?   id_payment
007      S_50                 S_50                 S_50                 NO        USJkn
008      S_50                 S_50                 S_50                 NO        t6yudd
008      S_50                 S_59                 NULL                 YES       68sgh
006      S_50                 S_50                 S_50                 NO        8lpifd

This way every record is stored under a unique id_payment that is a common identifier for a single transaction. I would be so thankful if you could help me out with this, or point me to some documentation on it. Thank you so much for your time and kindness.

Queries:

Stage A
index="aws_pay_001_loop_page" | search tx_loan_where="A" | fields client id_payment clientcheck_code

Stage B
index="aws_pay_001_loop_page" | search tx_loan_where="B" | fields client id_payment clientcheck_code

Stage C
index="aws_pay_001_loop_page" | search tx_loan_where="C" | fields client id_payment clientcheck_code

Kindly,

Cindy
Hey! I am using the Splunk Machine Learning Dashboard App, Scatter Line Chart, but I am getting this error: "These results may be downsampled/truncated. This visualization is configured to display a maximum of 50 series, and 1000 results per series, and that limit has been reached." How do I allow all results, or avoid downsampling? Since it is machine learning, I cannot use a charting command. Thank you.
Generate an alert when the Status field changes from failure to success. So we want the first success responsecode after a failure.
Hi, I am using the latest version of the Splunk Add-on for Microsoft Cloud Services, 4.1.2, on Splunk 8.1.3. I have followed the guides for setting up the Azure event hub, the application, and the event hub input in the add-on, but I do not get any data in whatsoever. Unlike previous versions, the event hub input does not include the SAS string, but only: Namespace (FQDN), Event Hub Name, Consumer Group, Max Wait Time, Max Batch Size, Index, Source Type, and Interval. In the log file I keep getting this error:

2021-04-22 18:21:16,315 level=WARNING pid=22752 tid=Thread-1 logger=uamqp.receiver pos=receiver.py:get_state:270 | LinkDetach("ErrorCodes.UnauthorizedAccess: Unauthorized access. 'Listen' claim(s) are required to perform this operation. Resource: 'sb://*********-monitor.servicebus.windows.net/*********/consumergroups/$default/partitions/0'. TrackingId:**************************

Any ideas as to where I should look?
Hi everyone, I have calculated a duration field, for example:

Duration
00:22:02
00:19:26
00:04:26
00:20:16
00:16:47

with this search:

my_search
| convert num("Duration")
| stats sum("Duration") as "Total"
| eval "Total"=tostring($Total$,"duration")

and the result is a total of all my durations: 5+17:02:53. Is there a way to convert or transform this result into 5 days 17 hours 02 minutes 53 seconds? Something like this would also be great: 5 days 17:02:53. Thank you very much!
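The arithmetic behind the desired display can be sketched outside SPL. This is an illustration of the day/hour/minute/second breakdown only, not a Splunk answer; 5+17:02:53 corresponds to 493373 total seconds:

```python
def format_duration(total_seconds):
    """Break a duration in seconds into 'D days HH:MM:SS'."""
    days, rem = divmod(total_seconds, 86400)      # 86400 seconds per day
    hours, rem = divmod(rem, 3600)
    minutes, seconds = divmod(rem, 60)
    return '%d days %02d:%02d:%02d' % (days, hours, minutes, seconds)

print(format_duration(493373))  # 5 days 17:02:53
```

In SPL the same breakdown is typically done with eval and modulo arithmetic on the summed seconds before tostring.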
Hello! I have a huge problem with the map command: I tried to use an ACF (autocorrelation function) for more than 1 field. The main point is that I cannot pass the field name using the map command. Let me show you an example:

source="datos.csv" | table Logging_ERROR, User_ERROR | transpose | table column | rename column as col | map [search source="datos.csv" | table "$col$" | fit ACF "Logging_ERROR" k=1440 fft=false conf_interval=90 ] maxsearches=2000000 (DOES NOT WORK)

source="datos.csv" | table Logging_ERROR, User_ERROR | transpose | table column | rename column as col | map [search source="datos.csv" | table "$col$" | fit ACF "$col$" k=1440 fft=false conf_interval=90 ] maxsearches=2000000 (WORKS)

The error shown is: Error in 'fit' command: Error while fitting "ACF" model: No valid fields to fit or apply model to.

I don't want to write these manually using append because I have a lot of fields; I just tried working with 2 fields to check whether it works. Does anyone know what is happening? Is it a bug? Is it possible to solve it?
Has anyone accomplished getting AWS Config Aggregator data into Splunk? Our Splunk infrastructure is entirely on-prem, and I haven't been able to figure out a way to use the 'Splunk Addon for AWS' to get the aggregated data into Splunk.  Similar issue as: https://community.splunk.com/t5/All-Apps-and-Add-ons/AWS-Config-Aggregators-in-Splunk-Add-on-for-AWS/m-p/531124 Thanks!
I am doing an inventory of all apps on my search head, but I have noticed that one is not listed, and I have thrown the kitchen sink at it. I can see the TA (UFMA - Unified Forwarder Monitoring and Alerting for Splunk) under All Configurations; I also see it in the Apps drop-down menu, and I see it in the Managed Apps page. Here is the syntax I am using; is there a better search string I should use to pick up all TAs and add-ons?

| rest /services/apps/local
| search disabled=*
| table label version
We are currently using Splunk Cloud with a very low limit (100 GB per day), but this is very expensive. We are migrating all our infrastructure over to k8s clusters on AWS. We are possibly looking at moving away from Splunk due to pricing, but first we are looking at other pricing models rather than ingest-based pricing, in particular workload-based pricing. Has anyone got this license model, and if so, is there a way I could work out compute power in Splunk so I can calculate a rough price?
I am an advanced beginner to Splunk, and I want to create a custom app/add-on in my search head cluster environment and push it via the deployer to all SHC members. Note: we have the GUI disabled in our environment for creating apps/add-ons on SHC members. Also, how can I place the app, tar and untar the app/add-on in the shcluster/apps directory, and push it to SHC members via the deployer? Please help with steps for best practice.
Hoping this isn't too basic a question... How can I share a dashboard without the user seeing Apps, Messages, Settings, and Activity across the top? The dashboard is a search, and users in the role have only the search, rtsearch, and rest_get_properties capabilities. We'd prefer that users in that role don't see the actual Search app, but especially not the tabs across the top mentioned above. Thanks in advance.
Hello, I need a simple Python script which writes data to Splunk, so that I can then see that data in Splunk by searching an index. Thank you.
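A common approach for this is Splunk's HTTP Event Collector (HEC). The following is a minimal sketch, assuming a HEC token has already been created; the host name, token, and index below are placeholders for your own environment:

```python
import json


HEC_URL = 'https://splunk.example.com:8088/services/collector/event'  # placeholder host/port
HEC_TOKEN = 'your-hec-token-here'  # placeholder: a valid HEC token


def build_event(data, index='main', sourcetype='_json'):
    """Build the JSON payload HEC expects for a single event."""
    return {'event': data, 'index': index, 'sourcetype': sourcetype}


def send_event(data):
    """POST one event to the HEC endpoint and return the HTTP status code."""
    import requests  # third-party: pip install requests
    r = requests.post(HEC_URL,
                      headers={'Authorization': 'Splunk ' + HEC_TOKEN},
                      data=json.dumps(build_event(data)),
                      verify=False)  # self-signed certs are common on HEC
    return r.status_code

# usage (not run here): send_event({'message': 'hello from python'})
```

After sending, the event should be searchable under the chosen index (e.g. index=main) within a few seconds.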
Hi! I created a report and set up a schedule for every hour. Why does the scheduled report show 0 results, while the same search run manually yields events?