All Posts



This didn't work...
If you need to preserve the original field then you aren't renaming. Use eval to create a new field based on the old one:

| eval NewIDs = case(OriginalIDs="P1D", "Popcorn", OriginalIDs="B4D", "Banana", OriginalIDs="O5D", "Opp", 1==1, OriginalIDs)
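A run-anywhere sketch of the same approach (the sample IDs are generated with makeresults purely for illustration):

```
| makeresults count=3
| streamstats count AS n
| eval OriginalIDs=case(n==1, "P1D", n==2, "B4D", n==3, "O5D")
| eval NewIDs=case(OriginalIDs=="P1D", "Popcorn", OriginalIDs=="B4D", "Banana", OriginalIDs=="O5D", "Opp", 1==1, OriginalIDs)
| table OriginalIDs NewIDs
```

Because eval adds a new field rather than renaming, both columns appear side by side, which is the "keep both columns in a list view" behaviour asked for.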
Splunk heavy forwarders (and indexers) send data to third-party services in syslog format only. They do not (and cannot) listen for data on a search head (SHs do not hold data). Place heavy forwarders in front of your indexers to route data both to the indexers and to ELK. See https://docs.splunk.com/Documentation/Splunk/9.2.1/Forwarding/Routeandfilterdatad#Replicate_a_subset_of_data_to_a_third-party_system for more information. Be aware that this is a bit of a science project; Splunk does not make it easy to switch to a competing product.
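For reference, the routing described in that doc page is configured on the heavy forwarder roughly as below. The stanza and host names are placeholders for illustration, not tested settings: outputs.conf defines the syslog target, and a props/transforms pair routes the matching data to it.

```
# outputs.conf (on the heavy forwarder) -- placeholder target for ELK
[syslog:elk_syslog]
server = elk.example.com:514

# props.conf -- apply routing to a sourcetype you want to replicate
[my_sourcetype]
TRANSFORMS-route_elk = route_to_elk

# transforms.conf -- send every matching event to the syslog output group
[route_to_elk]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = elk_syslog
```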
There are many factors that could cause performance issues in your prod environment that weren't present in dev; production normally has more data and many other variables that could cause issues. Splunk is a workhorse: it needs CPU/memory/disk resources and other factors to be in place.

Things to consider:
- Has the environment been sized correctly for production?
- Is the storage on fast disks (SSD, etc.)?
- Are there lots of users running the same search, over All Time, at the same time?
- Do you have indexer clustering, or is it a Splunk all-in-one deployment?

The add-ons (TAs) normally provide parsing and other knowledge objects, and could potentially impact the environment with regex processing, as an example. The Splunk apps, on the other hand, contain searches and dashboards that could potentially cause impact through long-running searches. But normally it comes down to the Splunk sizing or something environment-specific. I don't recall a TA ever causing performance issues in a PROD environment, but I guess it could happen.

I suggest:
- Use the Monitoring Console for the production environment; this is a good place to start investigating performance issues.
- Check CPU/memory on the SHs and indexers first.
- Check the searches run and search memory usage, using the MC.
- If you remove the TA, does performance improve? If you re-install it, does it get bad again?
- If all that fails, then perhaps look at logging a support call.

Monitoring Console: https://docs.splunk.com/Documentation/Splunk/9.2.1/DMC/DMCoverview
Splunk sizing guide: https://lantern.splunk.com/Splunk_Platform/Product_Tips/Administration/Sizing_your_Splunk_architecture
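Alongside the Monitoring Console, one hedged starting point for spotting heavy searches is the audit index (field names as in a default install; adjust the time range to suit):

```
index=_audit action=search info=completed earliest=-24h
| stats count AS searches, avg(total_run_time) AS avg_runtime_s, max(total_run_time) AS max_runtime_s BY user
| sort - max_runtime_s
```

This shows which users are running the most (and longest) searches, which is often where prod-only slowdowns come from.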
I would like to rename the field values that exist in one column and add them into their own separate column, while keeping the original column (with the values before they were renamed) to show how they map to the new values in the new column. The idea is: I have a list of IDs (original) that I want to map to different names in a separate column that represent those original IDs (basically aliases), but I want to keep both columns in a list view. How would I go about doing that?

Example display:

OriginalIDs  NewIDs
P1D          Popcorn
B4D          Banana
O5D          Opp
Hi all, I have a message field containing multiple success messages. I am using stats values(message) as message, and I want to show only one of the success messages in the output. For that I used the query below to restrict the other message values using mvdedup, but it is not filtering:

| eval Result=mvdedup(mvfilter(match(message, "File put Succesfully*") OR match(message, "Successfully created file data*") OR match(message, "Archive file processed successfully*") OR match(message, "Summary of all Batch*") OR match(message, "processed successfully for file name*") OR match(message, "ISG successful Call*") OR match(message, "Inbound file processed successfully") OR match(message, "ISG successful Call*")))
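One thing worth noting: match() takes a regular expression, not a wildcard, so a trailing "*" in a pattern like "File put Succesfully*" only makes the final character optional/repeatable rather than matching "anything after". A run-anywhere sketch of mvfilter plus mvdedup using plain regex alternation (the sample messages are made up):

```
| makeresults
| eval message=split("File put Succesfully for A;Inbound file processed successfully;unrelated error", ";")
| eval Result=mvdedup(mvfilter(match(message, "File put Succesfully|processed successfully")))
| table message Result
```

Here Result keeps only the values matching the success patterns, with duplicates removed.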
I have created a search that contains a field that is unique, and I am using this search to populate the index. However, for some reason, when I try to check whether a record is already in the index, it doesn't work for me. The closest I have come is this:

| localop
| rest .... ```first search key field```
| eval soar_uuid=id+"_RecordedFuture"
| append [search index=rf-alerts soar_uuid | rename soar_uuid as ExistingKey]
| table soar_uuid, triggered, rule.name, title, classification, url, ExistingKey

The above returns a list of new records with a blank ExistingKey field, and matching keys for soar_uuid of existing records with a blank soar_uuid field. If I could just populate either with the other field, then I could remove all the duplicates. I want to remove the new records that match the existing records before writing the events to the index. appendsearch instead of append doesn't seem to return the existing records.
I'm in a situation where I have 2 servers, one active and the other passive. I had to deploy the TA on both servers and report the status of a service. So the active server would report the service as "Running" and the passive server would say the service is "stopped". I have tried writing some SPL, but my only worry is how to get it reported if the service stops on the active server, or if there is no data from the active server. There should always be at least 1 server reporting the service as "Running"; only during a DR situation would the server name change.

index=mday source="service_status.ps1" sourcetype=service_status os_service="App_Service" host=*papp01
| stats values(host) AS active_host BY status
| where status=="Running"
| append [ search index=mday source=service_status.ps1 sourcetype=service_status os_service="App_Service" host=*papp01
    | stats latest(status) AS status BY host, os_service, service_name ]
| filldown active_host
| where active_host=host AND status!="Running"
| table host, active_host, os_service, service_name, status

Any help is much appreciated.
Thank you @renjith_nair, this is fine for what I need.
Thank you for the information @hrawat ! Do I understand correctly that the "backports" are coming with the major release, as you said "conf release" - so around the Splunk conference, June 11-14?
From the MC, run the below; it should give you a starting point:

index=_internal source=*metrics.log group=tcpin_connections fwdType=uf hostname=*
| eval hostname=lower(hostname)
| fields _time hostname sourceIp arch destPort fwdType os ssl version
| table _time hostname sourceIp arch destPort fwdType os ssl version
| dedup hostname
It's a very generic question. You have plenty of possibilities in the network area:
- Authentication
- Firewalls
- Proxy
- WAF
- Perimeter security
- Load balancing
and so on.

Have a look at https://www.splunk.com/en_us/blog/learn/network-security.html and https://www.splunk.com/en_us/blog/learn/network-monitoring.html and you should probably find something to start with.
https://<controller_FQDN>/controller/rest/applications
https://<controller_FQDN>/controller/rest/applications/<application_id>/tiers

E.g. https://example.com/controller/rest/applications/1234/tiers, where 1234 is the application ID.
@Trusty  You can use the lookup to enrich the dataset and then filter based on the value:

| makeresults
| eval dscip="192.168.1.1 192.168.2.2 192.168.1.2"
| makemv dscip
| mvexpand dscip
| rename comment as "Above is just data generation"
| lookup lookup.csv system-ip as dscip OUTPUT system-alias as env
| where env = "prod"
@vananhnguyen  For that we need to know what value comes in, to map it to a color. We can transpose the result, set the result as column values, and set the colors. Please check the following run-anywhere example; makeresults is just used to create a set of dummy data:

{
  "visualizations": {
    "viz_PKMJkTej": {
      "type": "splunk.column",
      "options": {
        "y": "> primary | frameBySeriesNames('count','Critical','Failure','Info','Success')",
        "seriesColorsByField": {
          "Critical": "#dc4e41",
          "Failure": "#f8be34",
          "Success": "#53a051",
          "Info": "#0051B5"
        },
        "x": "> primary | seriesByName('count')",
        "y2": "> primary | frameBySeriesNames('Critical','Failure','Info','Success')"
      },
      "dataSources": {
        "primary": "ds_Lmyq9G4p"
      }
    }
  },
  "dataSources": {
    "ds_Lmyq9G4p": {
      "type": "ds.search",
      "options": {
        "query": "| makeresults count=100 \n| eval value=random() \n| eval status=case(value%2==0,\"Success\",value%3==0,\"Failure\",value%4==0,\"Warning\",value%5==0,\"Critical\",1==1,\"Info\") \n| stats count by status \n| transpose header_field=status column_name=count"
      },
      "name": "Search_1"
    }
  },
  "defaults": {
    "dataSources": {
      "ds.search": {
        "options": {
          "queryParameters": {
            "latest": "$global_time.latest$",
            "earliest": "$global_time.earliest$"
          }
        }
      }
    }
  },
  "inputs": {
    "input_global_trp": {
      "type": "input.timerange",
      "options": {
        "token": "global_time",
        "defaultValue": "-24h@h,now"
      },
      "title": "Global Time Range"
    }
  },
  "layout": {
    "type": "absolute",
    "options": {
      "width": 1440,
      "height": 960,
      "display": "auto"
    },
    "structure": [
      {
        "item": "viz_PKMJkTej",
        "type": "block",
        "position": { "x": 0, "y": 0, "w": 1010, "h": 300 }
      }
    ],
    "globalInputs": [ "input_global_trp" ]
  },
  "description": "",
  "title": "Static Colors"
}
Hello, I have a search query like this:

index=test dscip=192.168.1.1 OR dscip=192.168.1.2 ...

I would like to search this list of IPs based on system-alias in my lookup. This is my sample lookup.csv:

system-alias  system-ip
prod          192.168.1.1
dev           192.168.2.2
prod          192.168.1.2

So what should the search query look like if I want to search only for prod IPs?

P
Dear team, may I know why no further version of this Splunk application (Splunk App for Jenkins) has been released since 2020? This is a fantastic app, useful for visualising Jenkins build status, access logs, and other statistical data. Could you please check and confirm? Thanks.
Using a left join, you should get 1000 events from the first part of the search (left and outer mean the same thing). The where command would strip out events which didn't match, but you already said that the 1000 from the first/left side of the join match with 1000 from the second/right side of the join, so I would not expect it to remove any events.
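A small run-anywhere sketch of that behaviour, with dummy ids generated by makeresults: with type=left, every row from the left side survives, and rows without a match simply get empty fields from the right side.

```
| makeresults
| eval id=split("1,2,3", ",")
| mvexpand id
| join type=left id
    [| makeresults
     | eval id=split("2,3,4", ",")
     | mvexpand id
     | eval matched="yes"]
| table id matched
```

Here id 1 comes back with an empty matched field, while ids 2 and 3 carry matched="yes"; no left-side rows are dropped.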
This is an example to make results:

| makeresults format=json data="[{\"browsers\":{\"0123456\":{\"id\":\"0123456\",\"fullName\":\"blahblah\",\"name\":\"blahblah\",\"state\":0,\"lastResult\":{\"success\":1,\"failed\":2,\"skipped\":3,\"total\":4,\"totalTime\":5,\"netTime\":6,\"error\":true,\"disconnected\":true},\"launchId\":7}},\"result\":{\"0123456\":[{\"id\":8,\"description\":\"blahblah\",\"suite\":[\"blahblah\",\"blahblah\"],\"fullName\":\"blahblah\",\"success\":true,\"skipped\":true,\"time\":9,\"log\":[\"blahblah\",\"blahblah\"]}]},\"summary\":{\"success\":10,\"failed\":11,\"error\":true,\"disconnected\":true,\"exitCode\":12}}]"
Hello, the Cisco add-on v2.7.3 significantly slows our Splunk Enterprise production platform when it is activated. The search "index=xxxxx sourcetype=cisco:ios" goes from a few ms on our development platform to more than 1 hour on our production platform. Do you know if any configuration in the add-on could affect the performance of some operations in a way that depends entirely on the platform configuration? Thanks a lot for your suggestions!