Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

I have a dashboard (form) where I'm trying to allow a text field to accept either a single value or comma-separated values, with the commas replaced by "* OR". Right now, when I first start up the dashboard and enter a single value, it just stays at "Search is waiting for input.."; if I enter comma-separated values, it runs the search. I tried wrapping the eval in an if statement to check for commas, but that didn't seem to do anything. Here is what I currently have:

<input type="text" token="clientTransactionId" searchWhenChanged="true" id="webOrder">
  <label>Transaction Id (for multiple use csv's):</label>
  <change>
    <eval token="clientTransactionId_formatted">if(like($clientTransactionId$, "%,%"), replace($clientTransactionId$, ",", "* OR"), $clientTransactionId$)</eval>
  </change>
</input>

And here is what I tried before:

<eval token="clientTransactionId_formatted">replace($clientTransactionId$,",","* OR")</eval>

How can I set this up to run? I need it to eval before the search; there are too many transactions to filter after the search has run. Thanks!
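A minimal sketch of one possible fix (an assumption, not a confirmed solution): since replace() leaves a comma-free value untouched, the if() isn't needed at all, and "Search is waiting for input.." usually means the formatted token is never set before the first <change> fires. Giving the input an <initialValue> makes the change handler run on load, so the token exists when the panels first render.

```xml
<input type="text" token="clientTransactionId" searchWhenChanged="true" id="webOrder">
  <label>Transaction Id (for multiple use csv's):</label>
  <!-- An initial value makes the <change> handler fire on load,
       so clientTransactionId_formatted is set before the panels search -->
  <initialValue>*</initialValue>
  <change>
    <!-- replace() returns a comma-free value unchanged, so no if() is required;
         note the trailing space in "* OR " so "a,b" becomes "a* OR b" -->
    <eval token="clientTransactionId_formatted">replace($clientTransactionId$, ",", "* OR ")</eval>
  </change>
</input>
```

The trailing-space detail is worth checking: with the original "* OR" (no space), "a,b" would become "a* ORb", which is not valid SPL.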
I have a search that looks at the output of a few scripts and lets me know if they are not running. These scripts cover our data collection for the instance, from our data lake. For example, if the asset script fails, I'd like to see "Asset Collector broke!" in the status field. I want to create a status field that updates for each of the 3 different scenarios, based on the average number of results returned. This is what I have tried so far, in various forms; thanks for the help.

basesearch
| stats avg(assets_collected) as ac avg(metrics_returned) as metrics avg(no_metrics) as not_returned
| eval status=case(ac==0, "Asset Collector broke!", assets==not_returned,"Cron Failure!",metrics==0,"Data Lake Failure!"
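A hedged guess at a working version of the attempt above (assuming `assets` was a typo for `ac`, adding the missing closing parenthesis, and adding a default branch):

```
basesearch
| stats avg(assets_collected) as ac avg(metrics_returned) as metrics avg(no_metrics) as not_returned
| eval status=case(ac==0, "Asset Collector broke!", ac==not_returned, "Cron Failure!", metrics==0, "Data Lake Failure!", true(), "OK")
```

case() returns null when no clause matches, so the final `true(), "OK"` pair acts as a catch-all and keeps the status field populated in the healthy case.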
DB inputs cannot be saved after migrating the Splunk DB Connect app from a test environment to another server; it keeps throwing an "Error in 'dbxquery' command: External search command exited unexpectedly with non-zero error code 1" when trying to execute the query. However, the query runs perfectly in SQL Developer.
Hello everyone. I have a traceability report as below:

Parent  Child
A       B
A       C
B       D
C       E
C       F

I want to create the linked traceability as:

Parent  Son  Grandson
A       B    D
A       C    E
A       C    F

I am struggling to identify the base query for this use case. Can anyone suggest one?
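One possible approach (a sketch, assuming the pairs live in a hypothetical lookup named traceability.csv; adjust to your actual data source): self-join the edge list on the middle element. `max=0` lets the join return every matching row rather than only the first, which matters for C having two children.

```
| inputlookup traceability.csv
| rename Child as Son
| join type=inner max=0 Son
    [| inputlookup traceability.csv
     | rename Parent as Son, Child as Grandson]
| table Parent Son Grandson
```

With the sample data, (A,B) joins to (B,D) giving A/B/D, and (A,C) joins to both (C,E) and (C,F) giving A/C/E and A/C/F; rows with no grandchildren are dropped by the inner join.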
Hi, we get a syslog like this one from a Cisco ISE:

calling-Station-ID=15.15.15.15, NAS-Port-Type=Virtual, Tunnel-Client-Endpoint=(tag=0) 95.90.193.23, cisco-av-pair=mdm-tlv=device-platform=win, cisco-av-pair=mdm-tlv=device-mac=cc-ed-ce-f9-af-47, cisco-av-pair=mdm-tlv=device-platform-version=6.3.9600 , cisco-av-pair=mdm-tlv=device-type=FUJITSU LIFEBOOK E744, cisco-av-pair=mdm-tlv=device-public-mac=ab-bd-ce-f9-af-47, cisco-av-pair=mdm-tlv=ac-user-agent=AnyConnect Windows 4.6.01103, cisco-av-pair=mdm-tlv=device-uid=C3FFF95AFDEE9CBA21839EA8D594D7F87165993CE2C8853A262179F90AC70167,

The key=value extraction works fine, but I end up with a multi-value field called cisco-av-pair containing values like these:

..
mdm-tlv=device-platform-version=6.3.9600
mdm-tlv=device-uid=C3..
..

I would like to have the mdm-tlv prefix cut off and key/value extraction applied to the subfields. In the end there should be single-value fields such as device-platform-version, device-uid, and device-platform with the corresponding values. How is that done in props/transforms? I know I can write a regex doing this, but a more generic way, without explicitly naming the fields, would be preferable.
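A hedged props/transforms sketch (the stanza and sourcetype names are placeholders): a search-time REPORT transform can take the field name from the data itself via `FORMAT = $1::$2`, so no field has to be named explicitly.

```ini
# transforms.conf
[cisco_av_pair_mdm_tlv]
# match "cisco-av-pair=mdm-tlv=<key>=<value>" in the raw event and
# use the captured key as the field name
REGEX = cisco-av-pair=mdm-tlv=([\w-]+)=([^,]+)
FORMAT = $1::$2
MV_ADD = true

# props.conf
[your:ise:sourcetype]
REPORT-mdm_tlv = cisco_av_pair_mdm_tlv
```

MV_ADD keeps repeated keys as multi-value fields instead of dropping the later occurrences.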
Can someone please help me make this search as efficient as possible? I am trying to make a Base ID Search and have all of the panels run off of it. One of the panels happens to be a report because I needed to accelerate it as it's a 24 hour report. Some fields are also dynamic. Have I reached a limitations or is it possible to have a Base Search and still be able to make fields and panels dynamic? <form> <label>Allowed Internet Traffic (Inside to Outside) 4/15</label> <description>Source IPs are only Internal IPs. Internal IPs excluded from the Destination. Excludes 10.#.#.# from SrcIP</description> <fieldset submitButton="false"> <input type="time" searchWhenChanged="true"> <label>Time:</label> <default> <earliest>-60m@m</earliest> <latest>now</latest> </default> </input> <input type="text" token="SrcIP" searchWhenChanged="true"> <label>Src IP</label> <default>*</default> </input> <input type="text" token="DstIP"> <label>Dst IP</label> <default>*</default> </input> </fieldset> <row> <panel> <single> <title>Firepower Allowed Packets</title> <search> <query>index=sourcefire sourcetype="cisco:sourcefire:appliance:syslog" AccessControlRule!=Block SrcIP="10.0.*" OR SrcIP="172.*" OR SrcIP="192.168.#.*" AND DstIP!="10.0.*" AND DstIP!="172.*" AND DstIP!="192.168.#.*" AND SrcIP!="10.0.#.#" AND DstIP!="8.8.8.8" AND DstIP!="208.67.222.222" AND DstIP!="208.67.220.220" AND DstIP!="208.67.222.220" AND DstIP!="208.67.220.222" SrcIP=$SrcIP$ DstIP=$DstIP$ | stats count</query> <earliest>$earliest$</earliest> <latest>$latest$</latest> <sampleRatio>1</sampleRatio> </search> <option name="refresh.auto.interval">1180</option> <option name="refresh.display">progressbar</option> </single> </panel> <panel> <table> <title>Firepower Allowed Packets Top 5 Source/Dest/Port IP</title> <search> <query>index=sourcefire sourcetype="cisco:sourcefire:appliance:syslog" AccessControlRule!=Block SrcIP="10.0.*" OR SrcIP="172.*" OR SrcIP="192.168.#.*" AND DstIP!="10.0.*" AND DstIP!="172.*" AND 
DstIP!="192.168.#.*" AND SrcIP!="10.0.#.#" AND DstIP!="8.8.8.8" AND DstIP!="208.67.222.222" AND DstIP!="208.67.220.220" AND DstIP!="208.67.222.220" AND DstIP!="208.67.220.222" SrcIP=$SrcIP$ DstIP=$DstIP$ | iplocation DstIP | stats count by SrcIP, DstIP, Country, DstPort | sort - by count | head 5</query> <earliest>$earliest$</earliest> <latest>$latest$</latest> <sampleRatio>1</sampleRatio> </search> <option name="count">100</option> <option name="dataOverlayMode">none</option> <option name="percentagesRow">false</option> <option name="refresh.auto.interval">1180</option> <option name="refresh.display">progressbar</option> <option name="rowNumbers">false</option> <option name="totalsRow">false</option> <option name="wrap">false</option> </table> </panel> <panel> <table> <title>Firepower Allowed Packets Top 5 Source IP</title> <search> <query>index=sourcefire sourcetype="cisco:sourcefire:appliance:syslog" AccessControlRule!=Block SrcIP="10.0.*" OR SrcIP="172.*" OR SrcIP="192.168.#.*" AND DstIP!="10.0.*" AND DstIP!="172.*" AND DstIP!="192.168.#.*" AND SrcIP!="10.0.#.#" AND DstIP!="8.8.8.8" AND DstIP!="208.67.222.222" AND DstIP!="208.67.220.220" AND DstIP!="208.67.222.220" AND DstIP!="208.67.220.222" SrcIP=$SrcIP$ DstIP=$DstIP$ | iplocation DstIP | stats count by SrcIP, DstIP, Country | sort - by count | head 5</query> <earliest>$earliest$</earliest> <latest>$latest$</latest> <sampleRatio>1</sampleRatio> </search> <option name="count">100</option> <option name="dataOverlayMode">none</option> <option name="percentagesRow">false</option> <option name="refresh.auto.interval">1180</option> <option name="refresh.display">progressbar</option> <option name="rowNumbers">false</option> <option name="totalsRow">false</option> <option name="wrap">false</option> </table> </panel> </row> <row> <panel> <chart> <title>Firepower Allowed Packets - 24 Hours</title> <search ref="Firepower Allowed Internal to External Packets - 24 Hours"></search> <option 
name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option> <option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option> <option name="charting.axisTitleX.visibility">visible</option> <option name="charting.axisTitleY.visibility">collapsed</option> <option name="charting.axisTitleY2.visibility">visible</option> <option name="charting.axisX.abbreviation">none</option> <option name="charting.axisX.scale">linear</option> <option name="charting.axisY.abbreviation">auto</option> <option name="charting.axisY.scale">linear</option> <option name="charting.axisY2.abbreviation">none</option> <option name="charting.axisY2.enabled">0</option> <option name="charting.axisY2.scale">inherit</option> <option name="charting.chart">column</option> <option name="charting.chart.bubbleMaximumSize">50</option> <option name="charting.chart.bubbleMinimumSize">10</option> <option name="charting.chart.bubbleSizeBy">area</option> <option name="charting.chart.nullValueMode">gaps</option> <option name="charting.chart.showDataLabels">minmax</option> <option name="charting.chart.sliceCollapsingThreshold">0.01</option> <option name="charting.chart.stackMode">default</option> <option name="charting.chart.style">shiny</option> <option name="charting.drilldown">all</option> <option name="charting.layout.splitSeries">0</option> <option name="charting.layout.splitSeries.allowIndependentYRanges">0</option> <option name="charting.legend.labelStyle.overflowMode">ellipsisEnd</option> <option name="charting.legend.mode">standard</option> <option name="charting.legend.placement">none</option> <option name="charting.lineWidth">2</option> <option name="refresh.display">progressbar</option> <option name="trellis.enabled">0</option> <option name="trellis.scales.shared">1</option> <option name="trellis.size">medium</option> </chart> </panel> <panel> <chart> <title>Allowed Packets by Country DestIP Top 5</title> <search> <query>index=sourcefire 
sourcetype="cisco:sourcefire:appliance:syslog" AccessControlRule!=Block SrcIP="10.0.*" OR SrcIP="172.*" OR SrcIP="192.168.#.*" AND DstIP!="10.0.*" AND DstIP!="172.*" AND DstIP!="192.168.#.*" AND SrcIP!="10.0.#.#" AND DstIP!="8.8.8.8" AND DstIP!="208.67.222.222" AND DstIP!="208.67.220.220" AND DstIP!="208.67.222.220" AND DstIP!="208.67.220.222" SrcIP=$SrcIP$ DstIP=$DstIP$ | iplocation DstIP | stats count by Country | sort - by count | head 5</query> <earliest>$earliest$</earliest> <latest>$latest$</latest> <sampleRatio>1</sampleRatio> </search> <option name="charting.chart">pie</option> <option name="charting.drilldown">none</option> <option name="refresh.display">progressbar</option> </chart> </panel> </row> <row> <panel> <title>DstIP Country</title> <table> <search> <query>index=sourcefire sourcetype="cisco:sourcefire:appliance:syslog" AccessControlRule!=Block SrcIP="10.0.*" OR SrcIP="172.*" OR SrcIP="192.168.#.*" AND DstIP!="10.0.*" AND DstIP!="172.*" AND DstIP!="192.168.#.*" AND SrcIP!="10.0.#.#" AND DstIP!="8.8.8.8" AND DstIP!="208.67.222.222" AND DstIP!="208.67.220.220" AND DstIP!="208.67.222.220" AND DstIP!="208.67.220.222" SrcIP=$SrcIP$ DstIP=$DstIP$ | iplocation DstIP | stats count by DstIP, Country | sort + by Country -count | head 5000</query> <earliest>$earliest$</earliest> <latest>$latest$</latest> <sampleRatio>1</sampleRatio> </search> <option name="count">100</option> <option name="dataOverlayMode">none</option> <option name="drilldown">cell</option> <option name="percentagesRow">false</option> <option name="refresh.display">progressbar</option> <option name="rowNumbers">false</option> <option name="totalsRow">false</option> <option name="wrap">false</option> </table> </panel> <panel> <title>Who is sending packets and to which Country</title> <table> <search> <query>index=sourcefire sourcetype="cisco:sourcefire:appliance:syslog" AccessControlRule!=Block SrcIP="10.0.*" OR SrcIP="172.*" OR SrcIP="192.168.#.*" AND DstIP!="10.0.*" AND DstIP!="172.*" AND 
DstIP!="192.168.#.*" AND SrcIP!="10.0.#.#" AND DstIP!="8.8.8.8" AND DstIP!="208.67.222.222" AND DstIP!="208.67.220.220" AND DstIP!="208.67.222.220" AND DstIP!="208.67.220.222" SrcIP=$SrcIP$ DstIP=$DstIP$ | iplocation DstIP | stats count by SrcIP, DstPort, Country | rename SrcIP to Source_IP | sort + by Country -count | head 5000</query> <earliest>$earliest$</earliest> <latest>$latest$</latest> <sampleRatio>1</sampleRatio> </search> <option name="count">100</option> <option name="dataOverlayMode">none</option> <option name="drilldown">cell</option> <option name="percentagesRow">false</option> <option name="refresh.display">progressbar</option> <option name="rowNumbers">false</option> <option name="totalsRow">false</option> <option name="wrap">false</option> </table> </panel> </row> </form>
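Regarding the base-search question above: yes, post-process panels can still use tokens, as long as every post-process starts from the base search's transformed output. A hedged sketch of how these panels could share one base (the long destination exclusion list is abbreviated here; keep the full list from the original panels):

```xml
<search id="base">
  <query>index=sourcefire sourcetype="cisco:sourcefire:appliance:syslog" AccessControlRule!=Block
    (SrcIP="10.0.*" OR SrcIP="172.*" OR SrcIP="192.168.#.*")
    DstIP!="10.0.*" DstIP!="172.*" DstIP!="192.168.#.*"
    SrcIP=$SrcIP$ DstIP=$DstIP$
    | iplocation DstIP
    | stats count by SrcIP, DstIP, DstPort, Country</query>
  <earliest>$earliest$</earliest>
  <latest>$latest$</latest>
</search>

<!-- a panel then post-processes the base instead of re-running the raw search -->
<search base="base">
  <query>| stats sum(count) as count by SrcIP, DstIP, Country | sort - count | head 5</query>
</search>
```

Two caveats, both hedged: the original queries mix OR and AND without parentheses, so implicit precedence may not be filtering as intended (the sketch groups the SrcIP alternatives explicitly); and a panel bound to a saved accelerated report via `<search ref="...">` cannot, as far as I know, also inherit from a base search, so the 24-hour report panel would have to stay separate.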
Hi guys, I am trying to figure out how I can represent a DISABLED data input (one that monitors a web URL) as planned_outage in my search results. To simplify, consider the following 3 inputs, which I query with the search below to calculate availability and failure percentage: title=abc, title=pqr, title=xyz.

sourcetype=web_ping `website_monitoring_search_index`
| search title IN(abc, pqr, xyz)
| fillnull value=1000 response_code
| eval success=case(response_code>=400, 0, timed_out == "True", 0, response_code="",0)
| fillnull value=1 success
| stats count as total, sum(success) as successes by title
| eval availability=round(100*(successes/total),2)
| eval failures=(100-'availability')
| stats count by title availability failures

This query seems to be working and gives the desired results, but when I disable a data input for a planned outage, that input completely disappears from the search results, as if it were not monitored at all, leaving us with no real-time data to show on the dashboard. So what I am trying to figure out is: is there any way to overcome this by modifying the query and/or creating a lookup file to compare the titles and render the disabled input as a planned outage in the search results? All my efforts to achieve this so far have failed. Can someone please help me figure out if there is any possible way to achieve this? Thanks in advance.
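One way to keep disabled inputs visible (a sketch, assuming a hypothetical lookup `monitored_sites.csv` that lists every title you intend to monitor): append the lookup after the stats, then treat titles that produced no events as a planned outage.

```
sourcetype=web_ping `website_monitoring_search_index` title IN(abc, pqr, xyz)
| fillnull value=1000 response_code
| eval success=case(response_code>=400, 0, timed_out=="True", 0, response_code="", 0)
| fillnull value=1 success
| stats count as total, sum(success) as successes by title
| eval availability=round(100*(successes/total),2), failures=100-availability
| append
    [| inputlookup monitored_sites.csv
     | fields title]
| stats values(availability) as availability values(failures) as failures by title
| eval status=if(isnull(availability), "planned_outage", "monitored")
```

A disabled input generates no web_ping events, so after the append it survives only as a lookup row with null availability and gets flagged as planned_outage instead of vanishing.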
I have table data where one row has 0s, and I need to replace those 0s only for that row. Example:

rowname: data
one: 5
two: 0
three: 0
four: 5

In this example I want to change the 0 belonging to rowname="two" to a different value, and only for that row. How can I achieve this?
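A short hedged sketch (the replacement value 99 and the field names rowname/data are assumptions taken from the example):

```
... | eval data=if(rowname="two" AND data=0, 99, data)
```

eval's if() rewrites the field only where both conditions hold, so every other row keeps its original value.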
On Splunk, when I want to do field extraction, it asks me for a source type, and when I open that listbox it shows the files on that path as sources. The logs are here: "/opt/logs". On this path there are also other files with different structures, for example config files, database exports, and so on. When I imported the data I also created a specific new index for it, but this index does not show in the source type listbox. Now I need to do field extraction on several log files that exist on that path. Any recommendations? Thanks.
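One hedged suggestion (paths and sourcetype names below are placeholders): field extractions are tied to source types rather than indexes, which is likely why the new index never appears in that listbox. Giving each family of files in /opt/logs its own sourcetype in inputs.conf lets the extraction target only the log files you care about.

```ini
# inputs.conf on the forwarder (hypothetical file patterns)
[monitor:///opt/logs/app*.log]
index = my_logs
sourcetype = my:app:log

[monitor:///opt/logs/*.conf]
index = my_logs
sourcetype = my:app:config
```

Field extraction can then be defined once against `my:app:log` without being polluted by the config files and database exports in the same directory.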
Hi, I need to monitor "host failure events" per hour over the last 24 hours for a group of 50 hosts. When the total reaches a threshold, like 10 fails, an alert email needs to be sent. This count and total need to occur each hour. What I want to do is schedule a report to count the fails by each host per hour, save the count, and then add the next hourly count to the previous count. When any host reaches 10 fails within the 24-hour window, the triggered action needs to send an email. At midnight, I would like to reset the count. Any advice appreciated. Thank you.
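A hedged sketch (index, sourcetype, and field names are assumptions): rather than accumulating counts across scheduled runs, an hourly alert can simply re-count the rolling window each time it fires. For the midnight reset, `earliest=@d` counts since the start of the current day instead of a rolling 24 hours.

```
index=myindex sourcetype=host_events result=failure earliest=-24h@h latest=@h
| stats count as fails by host
| where fails >= 10
```

Schedule this hourly and set the alert to trigger when the number of results is greater than 0; each returned row is a host over the threshold and can be listed in the email body.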
I am new to Splunk and I am trying to collect AnyConnect VPN login history for my Cisco ASA 5515-X. I am already getting syslog from the firewall (debugging level) and can search on syslog ID 722055 to see the individual logins. I have been collecting syslog for about a week, so I was wondering if anyone else has tried this and could offer some tips: search examples, other syslog IDs, etc.?
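A hedged starting point (the index name is a placeholder, and the rex pattern is an assumption about the 722055 message layout; adjust it to match your actual events):

```
index=firewall "%ASA-" "722055"
| rex "Group <(?<group>[^>]+)> User <(?<user>[^>]+)> IP <(?<src_ip>[^>]+)>"
| table _time user src_ip group
| sort - _time
```

Many ASA VPN messages follow the `Group <...> User <...> IP <...>` layout, so the same rex can often be reused for related session events once you identify their IDs in your data.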
I'm curious whether any work is being done on the Salesforce app to make it work with Splunk 8.x?
I used a column of a stats table as a dropdown list, so the dropdown token represents a selected cell. But when I try to do a search using the token variable $tokenname$, it isn't treated as a string, so I couldn't use it in a where clause. Here is the field I want to compare:

mvexpand metrics
| spath input=metrics
| rename "code" as code

code is a string JSON field. `where code=$tokenname$` does not work because the token is not a string.
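One likely fix (hedged, assuming the token holds a plain string value): quote the token so the where clause compares against a string literal.

```
... | mvexpand metrics | spath input=metrics | where code="$tokenname$"
```

In `where`, an unquoted value after `code=` is parsed as a field reference rather than a literal, which is usually why the comparison silently matches nothing.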
We have Splunk Enterprise installed in a DMZ with strict firewall rules about how to communicate with our indexer array. When I set up forwarding in outputs.conf, I designated our indexer IPs and port 9997:

[tcpout:default-autolb-group]
disabled = false
server = IP1:9997, IP2:9997, IP3:9997, IP4:9997

Forwarding isn't working, though. When I check the ports with the "lsof -i -P -n" command, I see that the Heavy Forwarder tries to talk to the indexers over random source ports:

splunkd 31931 root 61u IPv4 570678 0t0 TCP [Heavy Forwarder]:41464->[IP2]:9997 (SYN_SENT)

Can I force the outbound SYN to go over port 9997?
Hi gurus! I have a question about clustering architecture. Let's say there are 5 independent indexer cluster sites, each running its own cluster: site 1 runs the idx1 cluster with 5 peer members, site 2 runs the idx2 cluster with 5 peer members, and so on. The customer then wants to build a central site 6 whose search head should be able to search the indexer clusters of sites 1, 2, 3, 4, and 5. Is this architecture possible? Where should the cluster master for site 6 be located? And if a cluster master for site 6 is not necessary, which of the cluster masters of sites 1-5 should take that role?
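For what it's worth, a search head can be configured to search multiple independent indexer clusters by listing several cluster masters, so site 6 needs no cluster master of its own. A hedged server.conf sketch for the site-6 search head (hostnames and keys are placeholders):

```ini
# server.conf on the site-6 search head
[clustering]
mode = searchhead
master_uri = clustermaster:site1, clustermaster:site2

[clustermaster:site1]
master_uri = https://cm-site1.example.com:8089
pass4SymmKey = changeme

[clustermaster:site2]
master_uri = https://cm-site2.example.com:8089
pass4SymmKey = changeme
```

The pattern extends to all five clusters by adding one `[clustermaster:...]` stanza per site and listing it in `master_uri`.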
Hi, I can't seem to find any apps on Splunkbase when I'm logged in (EMEA). I have no problem seeing apps on Splunkbase when I'm not logged in. I saw there was a similar problem reported in 2014.
Hi, I'm struggling with the syntax of this blacklist expression and would much appreciate some guidance from anybody who can help. What I'd like to do is blacklist all events other than EventCode=4740 which match the pattern Message="Account Name: *$". I've been trying variations on the following expression without much success:

blacklist1 = EventCode="(?!4740)" Message="Account Name:[\s+](.\$)"

The Message expression works fine; it's the EventCode negative lookahead clause that I'm struggling with. Thanks in advance for any suggestions you can provide.
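A hedged suggestion (a sketch, not a verified fix): the lookahead likely needs anchoring. Unanchored, `(?!4740)` can still succeed one character into the string "4740", so the clause effectively excludes nothing. Anchoring the expression to the whole value should behave as intended:

```ini
# inputs.conf (hypothetical stanza name)
[WinEventLog://Security]
# drop machine-account events (Account Name ending in $) for every
# EventCode except 4740
blacklist1 = EventCode="^(?!4740$)\d+$" Message="Account Name:\s+\S+\$"
```

The Message regex here is tightened slightly as well; if the original `[\s+](.\$)` form already matches your events, it can be kept unchanged.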
Hi all, I was working with Splunk support on an issue from my previous post. They suggested the first step should be to correct the error seen in splunkd.log, where interfaces.sh sees a virbr0 interface on my CentOS machines. I temporarily disabled interfaces.sh as a stop-gap to see if it fixes the issue, but I would like to correct it permanently. The hosts are VMs, and to my understanding virbr0 exists in case the host acts as a hypervisor. Because we are not creating any VMs from these hosts, this interface should be redundant. Is there a way to NOT have interfaces.sh monitor the virbr0 interface? Below are the errors I am receiving. Thank you.

04-21-2020 08:38:52.939 -0400 ERROR ExecProcessor - message from "/opt/splunkforwarder/etc/apps/Splunk_TA_nix/bin/interfaces.sh" cat: /sys/class/net/virbr0/duplex: Invalid argument
04-21-2020 08:38:52.951 -0400 ERROR ExecProcessor - message from "/opt/splunkforwarder/etc/apps/Splunk_TA_nix/bin/interfaces.sh" cat: /sys/class/net/virbr0/speed: Invalid argument
04-21-2020 08:38:52.955 -0400 ERROR ExecProcessor - message from "/opt/splunkforwarder/etc/apps/Splunk_TA_nix/bin/interfaces.sh" cat: /sys/class/net/virbr0-nic/duplex: Invalid argument
04-21-2020 08:38:52.964 -0400 ERROR ExecProcessor - message from "/opt/splunkforwarder/etc/apps/Splunk_TA_nix/bin/interfaces.sh" cat: /sys/class/net/virbr0-nic/speed: Invalid argument
Hi Splunkys, I have problems generating my PDF report. Report generation crashes randomly after exactly 5 or 10 minutes; not 3 or 7 minutes, exactly 5 or 10 minutes. From python.log:

2020-04-17 10:30:01,273 +0200 INFO sendemail:1160 - sendemail pdfgen_available = 1
2020-04-17 10:30:01,274 +0200 INFO sendemail:1301 - sendemail:mail effectiveTime=1587112200
2020-04-17 10:35:02,955 +0200 ERROR sendemail:1167 - An error occurred while generating a PDF: Failed to fetch PDF (status = 400): Unable to render PDF.&lt;br/&gt;<ul><li>Exception raised while trying to prepare "File Monitor" for rendering to PDF. Timed out waiting for status to become available on job=1587112202.62</li></ul>

I tried the following config options without a positive result:

web.conf > appServerProcessShutdownTimeout
web.conf > splunkdConnectionTimeout
pdf_server.conf > firefox_timeout

Any ideas? Greetings, Markus
Hello, everybody! I want to ask something that has already been asked several times, but there is still no clear solution. My initial query gives me a set of events, each of which has child_id and parent_id fields. Sample data looks like this:

child_id | parent_id
********************
null | A1
null | B1
A1 | A2
B1 | B2
A2 | C1
B2 | C1
C1 | C2
C2 | D1
C2 | E1

So the elements at the bottom of the hierarchy have child_id = null. The depth of the parent-child relationships is not known in advance. I wonder how I can restore these events into the hierarchy, so that if I pick a specific event, my search returns only that event and all events in the chain beneath it. For example:

If I search child_id=B2, I need to get two events, child_id=B2 (root) and child_id=B1 (1 child), as results.
If I search child_id=C1, I need to get five events, child_id=C1 (root) and child_id=A2, child_id=B2, child_id=A1, child_id=B1 (4 children), as results, and so on.

In other words, I need to get chains from the initial data:

child_id | chain
****************
A1 | A1
A2 | A2 -> A1
B1 | B1
B2 | B2 -> B1
C1 | C1 -> A2 -> A1
C1 | C1 -> B2 -> B1
C2 | C2 -> C1 -> A2 -> A1
C2 | C2 -> C1 -> B2 -> B1
D1 | D1 -> C2 -> C1 -> A2 -> A1
D1 | D1 -> C2 -> C1 -> B2 -> B1
E1 | E1 -> C2 -> C1 -> A2 -> A1
E1 | E1 -> C2 -> C1 -> B2 -> B1

I tried to achieve this with transaction and map, but no luck so far. It looks like I need some kind of recursion. Is it maybe possible to implement recursion with a search macro pointing to itself?
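SPL has no true recursion (a macro cannot call itself), so one common workaround is to self-join one level at a time up to the maximum depth you expect. A hedged sketch (`myindex` and the `$start_id$` token are placeholders; add one join per extra level):

```
index=myindex child_id=$start_id$
| fields child_id
| rename child_id as l0
| join type=left max=0 l0
    [search index=myindex | rename parent_id as l0, child_id as l1 | fields l0 l1]
| join type=left max=0 l1
    [search index=myindex | rename parent_id as l1, child_id as l2 | fields l1 l2]
| join type=left max=0 l2
    [search index=myindex | rename parent_id as l2, child_id as l3 | fields l2 l3]
| eval chain=l0
    .if(isnull(l1) OR l1="null", "", " -> ".l1)
    .if(isnull(l2) OR l2="null", "", " -> ".l2)
    .if(isnull(l3) OR l3="null", "", " -> ".l3)
| table chain
```

With the sample data and start C1, the joins fan out to C1 -> A2 -> A1 and C1 -> B2 -> B1 (`max=0` keeps both branches). When the depth is genuinely unbounded, an external lookup script is often the cleaner answer than stacking joins.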