Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hello, I am building custom dashboards with metrics from the F5 extension and wildcards. In the graphics, the name of the variable ${n} or the automatic name is very long, and because you use wildcards you cannot set a fixed name. Is there a way to take only the last "n" characters or fields? Best regards
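The trimming being asked about can be sketched in plain Python; the metric name below is hypothetical, and both the character count and the field count are arbitrary choices:

```python
# Hypothetical long metric name, e.g. produced by a wildcard F5 metric query.
name = "f5.pools.pool_common_app.members.10-1-2-3_443.status"

# Last 10 characters of the name.
last_chars = name[-10:]

# Last 2 dot-separated fields of the name.
last_fields = ".".join(name.split(".")[-2:])
```

Here `last_chars` is `"443.status"` and `last_fields` is `"10-1-2-3_443.status"`; in SPL the same idea would use string functions such as `substr` or `split`/`mvindex` in an `eval`.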
Hello, We are in the process of migrating our Splunk 7.1.2 installation onto new hardware with v8.x. We currently have several reports created under the Cisco Security Suite App that we need to migrate; however, this app is not officially supported, and I was wondering what the replacement is. Specifically, we rely on the ASA and ESA parts of this application. I have tried installing it in a lab environment and have not detected any issues, but I would prefer to stay within the supported boundaries. Thanks in advance for your help. Regards, Vincent
Hi there. Our Security team requested this app, so we'd like to give it a try. We're in Splunk Cloud (managed, multi-tier, clustered, v8.0). Splunk Cloud Support installed the app, and the script (and presumably the curl call) appears to be executing successfully, but we get the following error message:

06-19-2020 18:03:47.502 ERROR script - sid:1592589699.48204_791B8AB7-1DA7-4625-BB26-A1D7AF2DC563 command="getwatchlist", Error fetching watch list: <urlopen error [Errno 110] Connection timed out>

They suggested I post a request to the developer of the app, which I'm quoting below:

Mon 6/22/2020 9:37 AM — [...] All I can see is that in the Python script there is a dictionary created to start the request (a POST, I guess) to the URL; the port set is 8080, which might perhaps be the issue. However, we cannot make changes at the script level, even more so because this app is not supported. I suggest you contact the app developers, check the port or network requirements according to the error you are receiving, and see if something has to be adjusted on our side. [...] — Splunk Technical Support

Is there a specific configuration request we should make of Splunk Cloud Support so that the curl call does not time out? Thanks in advance.
I need to filter the VMs by App. What I have is this. Now I want to add more apps and, based on the selection in the Application drop-down, filter the Environment drop-down (i.e., show only the choices associated with the selected Application). The full requirement: on selection of a specific Application, the VMs shown in the Environment drop-down should be only those associated with that Application.
I have data like:

202-06-19T13:02:293 message="event(level=Error name=xyz)  context: { Id: 12345, locale: 'us' blah blah

My objective is to get the error count corresponding to each Id. I have a CSV, say abc.csv, from which I have to look up the Id and display results only for the Ids present in the CSV. Moreover, for some logs the id is extracted as a field, but for others it is not. I used the query below:

index=rxc sourcetype="rxcapp" (level=ERROR) earliest=-30m
| rex field=_raw "Id:[\S\s]+?(?<Id>.\d+)"
| search [| inputlookup abc.csv | rename id as Id | fields Id]
| lookup abc.csv id As Id OUTPUT site
| stats count by name site level

It gives the correct result when I run the search, but when I commit it on GitHub it throws an error like this:

REX FIELD checks for use of _raw
FAILURE: in file local/searches.conf in section [ABC   Error alert] -> rex field cannot = _raw

Is there any way I can achieve what I want without using _raw? (FYI, "context" is also not logged as a field.)
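The extraction itself can be sketched in plain Python against the sample event above; this is only an illustration of the pattern, not a fix for the `_raw` lint failure:

```python
import re

# Sample event text from the question; extract the numeric Id after "Id:".
raw = 'message="event(level=Error name=xyz)  context: { Id: 12345, locale: \'us\''

match = re.search(r"Id:\s*(?P<Id>\d+)", raw)
event_id = match.group("Id") if match else None
```

With the sample above, `event_id` is `"12345"`. Note the pattern anchors on the literal `Id:` label and captures only digits, which is tighter than the `[\S\s]+?` form in the original SPL.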
Hi, I am looking for a solution to encircle an entire table row with a red line instead of highlighting the row. I have a dropdown and three panels in a dashboard. The dropdown holds the posting date (Today, Yesterday, etc.) in YYYYMMDD format. When I choose a value from the dropdown, my first panel is a table with the summary details for all the posting dates, as below.

With my current solution, if I choose a posting date (Picture 1) — in the example below I chose July 21st — it highlights the value in red (Picture 2) in my first panel. Instead of highlighting the specific cell in red, I want the entire row for the chosen posting date to be encircled by a red rectangle, as shown in Picture 3.

Picture 1:
Picture 2:
Picture 3:

Likewise, my second panel (Picture 4) populates along with the first panel. The second table panel has columns like process, process_end_date, job, job_description, and sla_time. If I click on any value in the process column in panel 2, the third panel populates; currently I only have a solution that highlights the single cell, as seen in Picture 5. Instead, it should encircle in red the entire row in the third panel that has the matching process name for the posting date selected from the dropdown.

Picture 4:
Picture 5:

@kamlesh_vaghela will you be able to help here?
Hello All, We are unable to get the Palo Alto Add-on and App to parse the incoming syslog traffic correctly. The environment we are in prevents us from setting up syslog-ng.conf exactly as the instructions list, and we are trying to determine whether the modification we have to make is causing the issues with the Add-on and App. Below is what the instructions call for in the inputs and outputs, followed by what we have in place.

Palo Alto instructions — under "Destinations", specify a .log file destination:

destination d_udp514 { file("/YOURPATH/udp514.log" template("${MSG}\n")); };

Our destinations:

destination d_5400-A058-PaloAlto { file("/var/log/syslog-ng/PaloAlto.5400-A058.$YEAR.$MONTH.$DAY.log" owner("root") group("root") perm(0644)); };
destination d_5400-B170-PaloAlto { file("/var/log/syslog-ng/PaloAlto.5400-B170.$YEAR.$MONTH.$DAY.log" owner("root") group("root") perm(0644)); };
destination d_5400-PA220-PaloAlto { file("/var/log/syslog-ng/PaloAlto.5400-PA220.$YEAR.$MONTH.$DAY.log" owner("root") group("root") perm(0644)); };

Palo Alto instructions — create or modify /opt/splunkforwarder/etc/system/local/inputs.conf and add a monitoring stanza:

[monitor:///YOURPATH/udp514.log]
sourcetype = pan:log

Our inputs (using a HF instead of a UF):

[monitor:///var/log/syslog-ng/PaloAlto.5400-A058.$YEAR.$MONTH.$DAY.log]
sourcetype = pan:log

That did not work, so we used the below instead:

[monitor:///var/log/syslog-ng()/PaloAlto*.log]
sourcetype = pan:log
Hello, I have an issue with the Indexer not retaining logs for the expected period, and I'm really scratching my head.

This is from local/indexes.conf. I have maxVolumeDataSizeMB configured on the volumes to provide ample storage.

[volume:hot]
path = /var/splunk/db/hot
maxVolumeDataSizeMB = 250000

[volume:cold]
path = /var/splunk/db/cold
maxVolumeDataSizeMB = 1100000

Lots of disk space free too:

df -h | grep splunk
250G 119G 132G 48% /var/splunk/db/hot
1.2T 136G 991G 13% /var/splunk/db/cold

I have various indexes with frozenTimePeriodInSecs configured for around 1 month / 3 months / 1 year.

[main]
homePath = volume:hot/defaultdb/db
coldPath = volume:cold/defaultdb/colddb
thawedPath = $SPLUNK_DB/defaultdb/thaweddb
frozenTimePeriodInSecs = 8640000

[history]
homePath = volume:hot/historydb/db
coldPath = volume:cold/historydb/colddb
thawedPath = $SPLUNK_DB/historydb/thaweddb

[summary]
homePath = volume:hot/summarydb/db
coldPath = volume:cold/summarydb/colddb
thawedPath = $SPLUNK_DB/summarydb/thaweddb

[_internal]
homePath = volume:hot/_internaldb/db
coldPath = volume:cold/_internaldb/colddb
thawedPath = $SPLUNK_DB/_internaldb/thaweddb
frozenTimePeriodInSecs = 7776000

# For version 6.1 and higher
[_introspection]
homePath = volume:hot/_introspection/db
coldPath = volume:cold/_introspection/colddb
thawedPath = $SPLUNK_DB/_introspection/thaweddb
frozenTimePeriodInSecs = 7776000

# For version 6.5 and higher
[_telemetry]
homePath = volume:hot/_telemetry/db
coldPath = volume:cold/_telemetry/colddb
thawedPath = $SPLUNK_DB/_telemetry/thaweddb
frozenTimePeriodInSecs = 7776000

[_audit]
homePath = volume:hot/audit/db
coldPath = volume:cold/audit/colddb
thawedPath = $SPLUNK_DB/audit/thaweddb
frozenTimePeriodInSecs = 7776000

[_metrics]
homePath = volume:hot/metrics/db
coldPath = volume:cold/metrics/colddb
thawedPath = $SPLUNK_DB/metrics/thaweddb
frozenTimePeriodInSecs = 7776000

[_thefishbucket]
homePath = volume:hot/fishbucket/db
coldPath = volume:cold/fishbucket/colddb
thawedPath = $SPLUNK_DB/fishbucket/thaweddb

[Cisco]
homePath = volume:hot/cisco/db
coldPath = volume:cold/cisco/colddb
thawedPath = $SPLUNK_DB/cisco/thaweddb
frozenTimePeriodInSecs = 3456000

[Windows]
homePath = volume:hot/windows/db
coldPath = volume:cold/windows/colddb
thawedPath = $SPLUNK_DB/windows/thaweddb
frozenTimePeriodInSecs = 31536000

[Linux]
homePath = volume:hot/linux/db
coldPath = volume:cold/linux/colddb
thawedPath = $SPLUNK_DB/linux/thaweddb
frozenTimePeriodInSecs = 31536000

[solaris]
homePath = volume:hot/solaris/db
coldPath = volume:cold/solaris/colddb
thawedPath = $SPLUNK_DB/solaris/thaweddb
frozenTimePeriodInSecs = 31536000

[db]
homePath = volume:hot/db/db
coldPath = volume:cold/db/colddb
thawedPath = $SPLUNK_DB/db/thaweddb
frozenTimePeriodInSecs = 8640000

[Antivirus]
homePath = volume:hot/antivirus/db
coldPath = volume:cold/antivirus/colddb
thawedPath = $SPLUNK_DB/antivirus/thaweddb
frozenTimePeriodInSecs = 8640000

[Mail]
homePath = volume:hot/mail/db
coldPath = volume:cold/mail/colddb
thawedPath = $SPLUNK_DB/mail/thaweddb
frozenTimePeriodInSecs = 8640000

[Test]
homePath = volume:hot/test/db
coldPath = volume:cold/test/colddb
thawedPath = $SPLUNK_DB/test/thaweddb
frozenTimePeriodInSecs = 604800

[msexchange]
homePath = volume:hot/msexchange/db
coldPath = volume:cold/msexchange/colddb
thawedPath = $SPLUNK_DB/msexchange/thaweddb
frozenTimePeriodInSecs = 8640000

[perfmon]
homePath = volume:hot/perfmon/db
coldPath = volume:cold/perfmon/colddb
thawedPath = $SPLUNK_DB/perfmon/thaweddb
frozenTimePeriodInSecs = 8640000

[wineventlog]
homePath = volume:hot/wineventlog/db
coldPath = volume:cold/wineventlog/colddb
thawedPath = $SPLUNK_DB/wineventlog/thaweddb
frozenTimePeriodInSecs = 8640000

[msad]
homePath = volume:hot/msad/db
coldPath = volume:cold/msad/colddb
thawedPath = $SPLUNK_DB/msad/thaweddb
frozenTimePeriodInSecs = 8640000

[proxy]
homePath = volume:hot/proxy/db
coldPath = volume:cold/proxy/colddb
thawedPath = $SPLUNK_DB/proxy/thaweddb
frozenTimePeriodInSecs = 8640000

[servicedesk]
homePath = volume:hot/servicedesk/db
coldPath = volume:cold/servicedesk/colddb
thawedPath = $SPLUNK_DB/servicedesk/thaweddb
frozenTimePeriodInSecs = 8640000

[fortigate]
homePath = volume:hot/fortigate/db
coldPath = volume:cold/fortigate/colddb
thawedPath = $SPLUNK_DB/fortigate/thaweddb
frozenTimePeriodInSecs = 8640000

[cloudflare]
homePath = volume:hot/cloudflare/db
coldPath = volume:cold/cloudflare/colddb
thawedPath = $SPLUNK_DB/cloudflare/thaweddb
frozenTimePeriodInSecs = 8640000

[environmental]
homePath = volume:hot/environmental/db
coldPath = volume:cold/environmental/colddb
thawedPath = $SPLUNK_DB/environmental/thaweddb
frozenTimePeriodInSecs = 8640000

[o365]
homePath = volume:hot/o365/db
coldPath = volume:cold/o365/colddb
thawedPath = $SPLUNK_DB/o365/thaweddb
frozenTimePeriodInSecs = 8640000

[vulnmgmt]
homePath = volume:hot/vulnmgmt/db
coldPath = volume:cold/vulnmgmt/colddb
thawedPath = $SPLUNK_DB/vulnmgmt/thaweddb
frozenTimePeriodInSecs = 31536000
homePath.maxDataSizeMB = 500
coldPath.maxDataSizeMB = 2000

[desktopcentral]
homePath = volume:hot/desktopcentral/db
coldPath = volume:cold/desktopcentral/colddb
thawedPath = $SPLUNK_DB/desktopcentral/thaweddb
frozenTimePeriodInSecs = 31536000
homePath.maxDataSizeMB = 500
coldPath.maxDataSizeMB = 2000

[f5]
homePath = volume:hot/f5/db
coldPath = volume:cold/f5/colddb
thawedPath = $SPLUNK_DB/f5/thaweddb
frozenTimePeriodInSecs = 8640000

[misc]
homePath = volume:hot/misc/db
coldPath = volume:cold/misc/colddb
thawedPath = $SPLUNK_DB/misc/thaweddb
frozenTimePeriodInSecs = 8640000

It seems all the indexes are only storing the last 10 days in the colddb:

ls -l /var/splunk/db/cold/linux/colddb/
total 40
drwx--x---. 3 splunk splunk 4096 Jun 12 16:32 db_1591260601_1591138980_1306
drwx--x---. 3 splunk splunk 4096 Jun 13 19:07 db_1591350601_1591259378_1307
drwx--x---. 3 splunk splunk 4096 Jun 15 08:09 db_1591441801_1591311780_1308
drwx--x---. 3 splunk splunk 4096 Jun 16 09:49 db_1591525801_1591440698_1309
drwx--x---. 3 splunk splunk 4096 Jun 17 10:09 db_1591620001_1591513320_1310
drwx--x---. 3 splunk splunk 4096 Jun 18 03:19 db_1591702201_1590591241_1311
drwx--x---. 3 splunk splunk 4096 Jun 19 01:26 db_1591783801_1591657380_1312
drwx--x---. 3 splunk splunk 4096 Jun 19 22:07 db_1591858861_1591743780_1314
drwx--x---. 3 splunk splunk 4096 Jun 20 22:59 db_1591936201_1591857301_1315
drwx--x---. 3 splunk splunk 4096 Jun 22 05:11 db_1592013661_1591916580_1316

Have I missed something? From my understanding of the docs, I only need to configure maxVolumeDataSizeMB to define the storage capacity, and frozenTimePeriodInSecs for how long logs are kept in cold storage before being rolled to frozen (deleted). Thanks
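When reading retention settings like these, it helps to convert the second counts in the config above into days; a quick Python sketch (values taken from the question's indexes.conf):

```python
# Sanity-check frozenTimePeriodInSecs values by converting seconds to days.
SECONDS_PER_DAY = 86400

retention = {
    "main": 8640000,        # intended "~1 month"?
    "_internal": 7776000,
    "Windows": 31536000,
    "Test": 604800,
}

retention_days = {idx: secs / SECONDS_PER_DAY for idx, secs in retention.items()}
```

This gives main = 100 days, _internal = 90 days, Windows = 365 days, and Test = 7 days, so 8640000 seconds is 100 days rather than one month — worth double-checking against the intended periods.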
We're creating an app which uses loadjob; however, loadjob requires savedsearch="<owner>:<app>:<saved search name>". In the app, the other fields are fine, but the owner would need to change as you move from testing the app (with a named account) to "nobody" within the packaged app. Is there a way to use loadjob within an app without hardcoding owner:app:savedsearch, to use dynamic variables, or to avoid loadjob altogether with an alternative? BTW, why loadjob? It is a long-running report, and it runs as a subsearch using a different time range from the main search, which makes it a scheduled report.
Hi All,
(Environment: Splunk 8.0 Cloud / Splunk Heavy Forwarder)

I have an alert configured to give a weekly report on Windows updates for all Windows servers (a mixture of Windows Server 2012 and 2016). When an update installs on a server, we get the report emailed to us weekly. We get verification that the Windows updates were installed on all servers, except for 3 domain controllers (Windows Server 2016 domain). Could someone look at this search string and let me know if something is missing, or whether I should be using different search criteria? Thanks in advance.

tag=Windows_Update package=*
| dedup package, host
| eval status=if(eventtype=="Update_Successful", "Success", if(eventtype=="Update_Failed", "Failed", "NA"))
| search NOT status="NA"
| stats latest(_time) as ltime, count by status, host, package
| convert ctime(ltime)
| eval lsuccess="Succesful at (".ltime.")"
| eval lfail="Failed at (".ltime.")"
| eval lstatus=if(status=="Success",lsuccess,lfail)
| stats values(lstatus) as Status_History by host, package
| sort host,package
| eval scount=mvcount(Status_History)
| eval Last_Status=if(scount>1,"Success",if(match(Status_History, "Success*"),"Success","Failed"))
| table host, package, Last_Status, Status_History
| sort host,package

Bob
I have numeric data that I'd like to group. It is easy to use the 'kmeans' command, but k is not necessarily 3; I want k to be chosen automatically. Or is there any other good way to group?

ex)
53,752
53,731
53,699
10,427
10,437
110,854
111,054
111,001
...

result)
53,752 1
53,731 1
53,699 1
10,427 2
10,437 2
110,854 3
111,054 3
111,001 3
...
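For one-dimensional data like this, one alternative to kmeans is gap-based grouping: sort the values and start a new group wherever the jump to the next value exceeds a threshold, so the number of groups falls out automatically. A plain-Python sketch, using the numbers from the question (the gap threshold is an assumption you would tune):

```python
def group_by_gap(values, gap=1000):
    """Assign each value a group id; a new group starts whenever the
    difference between consecutive sorted values exceeds `gap`."""
    labels = {}
    group = 0
    prev = None
    for v in sorted(values):
        if prev is None or v - prev > gap:
            group += 1  # jump detected (or first value): open a new group
        labels[v] = group
        prev = v
    return labels

data = [53752, 53731, 53699, 10427, 10437, 110854, 111054, 111001]
labels = group_by_gap(data)
```

With the sample data this produces three groups (the 10k values, the 53k values, and the 110k values), though the group numbers follow sorted order rather than the order shown in the question.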
Hi All, I need help applying color to status icons. When I use only icons, the color works via the JavaScript, but if I add the text value along with the color it does not work. @kamlesh_vaghela could you please help? I want to display results with the text and icon together in the same column.

Output:
Information (ICON)
Error (ICON)

JavaScript:

require([
    'underscore',
    'jquery',
    'splunkjs/mvc',
    'splunkjs/mvc/tableview',
    'splunkjs/mvc/simplexml/ready!'
], function(_, $, mvc, TableView) {
    // Translations from rangemap results to CSS class
    var ICONS = {
        Critical: 'check-circle',
        Error: 'alert',
        Warning: 'check-circle',
        Debug: 'check-circle',
        Information: 'check-circle',
        Trace: 'check-circle',
        None: 'check-circle'
    };
    var CustomIconRenderer = TableView.BaseCellRenderer.extend({
        canRender: function(cell) {
            // Only use the cell renderer for the range field
            return _(['Event Level']).contains(cell.field);
        },
        render: function($td, cell) {
            var icon = 'question';
            // Fetch the icon for the value
            if (ICONS.hasOwnProperty(cell.value)) {
                icon = ICONS[cell.value];
            }
            // Create the icon element and add it to the table cell
            $td.addClass('icon').html(_.template('<%- text %> <i class="icon-<%- icon %>"></i>', {
                icon: icon,
                text: cell.value
            }));
        }
    });
    mvc.Components.get('tbl1').getVisualization(function(tableView) {
        // Register custom cell renderer; the table will re-render automatically
        tableView.addCellRenderer(new CustomIconRenderer());
    });
});

CSS:

/* Custom Icons */
td.icon { text-align: center; }
td.icon i { font-size: 25px; text-shadow: 1px 1px #aaa; }
td.icon .Critical { color: #F70000; }
td.icon .Error { color: #EEFA33; }
td.icon .Warning { color: #006400; }
td.icon .Debug { color: #006400; }
td.icon .Information { color: #006400; }
td.icon .Trace { color: #006400; }
td.icon .None { color: #006400; }
How do I configure Splunk Stream to capture ALL traffic on a specific network interface? I'm taking traffic from a Gigamon host. Here's what I have so far:

[streamfwd://streamfwd]
streamfwdcapture.0.interface = Ethernet0
streamfwdcapture.0.offline = false
disabled = 0
index = GigamonEthernet0

Will the above get the job done? Thanks in advance for your help.
Hi there, Just a quick question, as I am not familiar with some basic routines yet. We use "ms:iis:auto" to ingest a basic IIS log file (W3C formatted); nothing special, except that it is a SharePoint website. After some time we looked at the data in Splunk, and we see this (raw format):

192.168.2.72 GET /_vti_bin/client.svc/web/title - 443 0#.w|domain\login 192.168.52.48 Mozilla/5.0

After some research, it seems that IIS log files can also contain/encode claims (here "0#.w|"): https://social.technet.microsoft.com/wiki/contents/articles/13921.sharepoint-20102013-claims-encoding.aspx

Is it possible to remove this claim (or, even better, map it), knowing it is not always present? That would allow us to see the user field in the proper format in Splunk rather than as "0#.w|domain\user". Have a good day,
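The stripping being asked about can be sketched in plain Python with a regex that removes an optional claims prefix of the exact shape shown above ("0#.w|"); other claims-encoding variants from the linked article would need a broader pattern:

```python
import re

# Optional SharePoint claims-encoding prefix such as "0#.w|" before the user,
# e.g. "0#.w|domain\login" -> "domain\login". Pattern matches only the
# digit + "#" + optional dot + letter + "|" form seen in the question.
CLAIM_PREFIX = re.compile(r"^\d+#\.?[a-z]\|")

def clean_user(user):
    """Strip the claims prefix if present; leave plain users untouched."""
    return CLAIM_PREFIX.sub("", user)
```

In Splunk the same idea would typically be a SEDCMD or a field transform at index/search time; the function name here is just for illustration.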
Hi All, I am struggling with a query where I have shaped the data like the following:

Type   _time             Store   Counts
Type1  22/06/2020 11:00  Store1  10
Type1  22/06/2020 11:00  Store2  20
Type1  22/06/2020 11:00  Store3  30
Type2  22/06/2020 11:00  Store1  100
Type2  22/06/2020 11:00  Store2  200
Type2  22/06/2020 11:00  Store3  300

And I need it to be like the below. Any help with this, please?

Type   _time             Store1  Store2  Store3
Type1  22/06/2020 11:00  10      20      30
Type2  22/06/2020 11:00  100     200     300
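The reshaping being asked for is a pivot: each Store value becomes its own column, keyed by Type and _time. A plain-Python sketch of that operation, using the rows from the question:

```python
from collections import defaultdict

# Rows of (Type, _time, Store, Counts) as in the question's first table.
rows = [
    ("Type1", "22/06/2020 11:00", "Store1", 10),
    ("Type1", "22/06/2020 11:00", "Store2", 20),
    ("Type1", "22/06/2020 11:00", "Store3", 30),
    ("Type2", "22/06/2020 11:00", "Store1", 100),
    ("Type2", "22/06/2020 11:00", "Store2", 200),
    ("Type2", "22/06/2020 11:00", "Store3", 300),
]

# Pivot: one output row per (Type, _time), one column per Store.
pivot = defaultdict(dict)
for typ, t, store, count in rows:
    pivot[(typ, t)][store] = count
```

This produces one entry per (Type, _time) pair with Store1/Store2/Store3 as keys, which is the shape of the desired second table.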
Hi! I am trying to get the duration of a project by calculating the number of days between today and the project submit date. They are both in the same format (mm/dd/yy), but there are no results. What am I doing wrong?

| eval "Project Submit Date"=strftime(strptime(project_submit_date,"%Y-%m-%d %H:%M:%S"), "%m-%d-%y")
| eval today=strftime(now(), "%m-%d-%y")
| eval "Duration"=(today-'Project Submit Date')/86400

Thank you very much!
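The underlying pitfall can be sketched in plain Python: date arithmetic has to happen on parsed timestamps, not on the formatted "%m-%d-%y" strings. The dates below are made up:

```python
from datetime import datetime

# Hypothetical dates: parse the formatted strings to datetimes first,
# then subtract; subtracting the strings themselves is not arithmetic.
submit = datetime.strptime("06-01-20", "%m-%d-%y")
today = datetime.strptime("06-22-20", "%m-%d-%y")

duration_days = (today - submit).days
```

Here `duration_days` is 21. The analogous idea in SPL would be to subtract the epoch values returned by strptime/now() directly, before any strftime formatting.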
I am trying to write a correlation search that fires if any host from my internal network (10.0.0.0/8), as a source or destination, communicates with any host in the following list of blacklisted subnets/IP addresses:

47.114.37.0/24
49.85.84.0/24
61.111.20.129/32
62.217.245.69/32
109.166.202.229/32
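The membership test at the core of this can be sketched with Python's standard ipaddress module, using the networks from the question (in Splunk itself this would typically be a CIDR lookup or `cidrmatch`):

```python
import ipaddress

# Blacklisted networks from the question.
BLACKLIST = [ipaddress.ip_network(n) for n in (
    "47.114.37.0/24",
    "49.85.84.0/24",
    "61.111.20.129/32",
    "62.217.245.69/32",
    "109.166.202.229/32",
)]

INTERNAL = ipaddress.ip_network("10.0.0.0/8")

def is_blacklisted(addr):
    """True if the address falls inside any blacklisted network."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in BLACKLIST)

def is_internal(addr):
    return ipaddress.ip_address(addr) in INTERNAL
```

A correlation would then flag events where one side is internal and the other side is blacklisted.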
Hi, we are trying to forward only the MAIN index from an indexer to a third-party system using TCP. I've seen the documentation (Forward data to third-party systems / Route and filter data) and some community articles. Unfortunately we still get other indexes (e.g. fortinet) forwarded as well. Any idea what we are doing wrong? The last attempt, from ..\system\local\outputs.conf:

## 21.6.2020
[tcpout]
defaultGroup = slms
indexAndForward = true
forwardedindex.0.whitelist =
#forwardedindex.1.blacklist = (_.*|fortinet)
forwardedindex.1.blacklist =
forwardedindex.2.whitelist =
forwardedindex.0.whitelist = main
forwardedindex.filter.disable = false

#[indexAndForward]
#index=true

[tcpout:slms]
server = 192.168.249.140:514
sendCookedData = false
blockOnCloning = false
#forwardedindex.0.whitelist =
#forwardedindex.1.blacklist =
#forwardedindex.2.whitelist =
#forwardedindex.2.whitelist = main
#forwardedindex.filter.disable = false
#
I need to create a new column that combines static text with a dynamic id. The values in the id column change over time; I need to combine the static and dynamic values to create the new field. Clicking it should open the corresponding URL.

Example:
id   URL             newcolumn
12   https:abc.bcd   https:abc.bcd/12
23   https:abc.bcd   https:abc.bcd/23
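The concatenation itself is simple string joining; in SPL it would be an `eval newcolumn = URL . "/" . id`. A plain-Python sketch over the example rows from the question:

```python
# Rows as in the question's example; build newcolumn = URL + "/" + id.
rows = [
    {"id": "12", "URL": "https:abc.bcd"},
    {"id": "23", "URL": "https:abc.bcd"},
]

for row in rows:
    row["newcolumn"] = row["URL"] + "/" + row["id"]
```

Making the resulting value clickable is then a presentation concern (e.g. a drilldown on the table column), separate from building the string.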
Hi, I have a performance issue with a query using a "join" command. The problem is that the first search uses a time picker on the last 4 hours, while the search inside the join (type=outer) uses "earliest=-30d". Example of the query:

index="A" sourcetype="AB" source="C"
| eval launch_time=round(strptime(launch_time, "%Y-%m-%dT%H:%M:%S"),0)
| eval search_time=now()
| eval launched_since=round((search_time-launch_time)/86400,0)
| where launched_since > 7
| dedup id sortby -_time
| lookup all_ids account_id OUTPUT acc_name site
| site=*
| join type=outer id
    [ search index="A" sourcetype="AC" source="D" earliest=-30d
    | lookup all_ids account_id OUTPUT acc_name site
    | site=*
    | rename agentId as id
    | dedup rpg id
    | sort rpg
    | stats values(rpg_name) as pg by id acc_name site
    | eval Name=if(like(pg,"%name1/%"),"Name1","Name2")
    | table id title platform pg Name]
| table site acc_name id pg Name launched_since
| dedup acc_name id
| eval Name2=if(isnull(Name), "NULL", Name)
| stats count(id) as count by Name2

Is it possible to make this search more efficient? Thanks in advance!